ACSOS in Practice Talks
We are proud to include four high-profile ACSOS in Practice talks in our program:
- Krishna Janakiraman (PlateIQ) — Automating Invoice Processing at Scale at PlateIQ
- Mudhakar Srivatsa (IBM T. J. Watson Research) — Foundation Models at the Edge
- Thanikesavan Sivanthi (ABB Corporate Research) — Autonomy in Industrial Systems: Are we there yet?
- Rui Han (Beijing Institute of Technology) — Key Research Challenges for Improving Deep Vision Applications at Edge
Krishna Janakiraman (PlateIQ) — Automating Invoice Processing at Scale at PlateIQ
Abstract: We present Parsley, the framework we use at Plate IQ to process and automate more than 2M invoices and over $2B in transaction volume per month. Plate IQ is a leader in the Accounts Payable automation industry. Accounts Payable automation is a global multi-billion-dollar industry that is transforming how businesses operate; it is essential for businesses to streamline their operations, reduce costs, and become more efficient. Automating invoice processing is a fundamental component of automating accounting workflows, and doing it at scale has been an incredible challenge for the industry.
Parsley uses a human-in-the-loop AI approach to invoice automation and is able to process invoices at scale, at very low cost, and with very high accuracy. We use state-of-the-art deep learning models on invoice images and text to extract structured information at both the invoice header and line-item level. Using validation models based on statistics and domain heuristics, we select the small fraction of model predictions that require human review. Our review tools are designed to let the human reviewer focus only on data points that require intervention and correction, thereby maximizing human throughput. The combined effect of our deep learning models and human review system dramatically reduces the cost of processing an invoice to less than one-tenth of the industry average while maintaining an error rate below 3%.
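The selection step described in the abstract can be sketched as follows. This is a minimal illustration of the general pattern, not Plate IQ's actual system: the confidence threshold, field names, and line-item-sum heuristic are all assumptions made for the example.

```python
# Hypothetical sketch of human-in-the-loop routing: only predictions that
# fail a statistical check (low confidence) or a domain heuristic
# (line items inconsistent with the total) are sent for human review.

def needs_review(prediction, confidence, threshold=0.95):
    """Return True if a predicted invoice should be routed to a human reviewer."""
    # Statistical check: low model confidence triggers review.
    if confidence < threshold:
        return True
    # Domain heuristic: line-item amounts should sum to the invoice total.
    items = prediction.get("line_items", [])
    if items and abs(sum(i["amount"] for i in items) - prediction["total"]) > 0.01:
        return True
    return False

prediction = {"total": 120.00, "line_items": [{"amount": 100.00}, {"amount": 20.00}]}
print(needs_review(prediction, confidence=0.99))  # consistent and confident -> False
print(needs_review(prediction, confidence=0.80))  # low confidence -> True
```

The point of structuring the check this way is that everything passing both filters skips human review entirely, which is what drives the cost reduction the abstract describes.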
Mudhakar Srivatsa (IBM T. J. Watson Research) — Foundation Models at the Edge
Abstract: Foundation models, starting with BERT for natural language processing, have vastly outperformed prior approaches in the last few years. These models exploit vast volumes of unlabeled data using self-supervision and produce base models that can be adapted to a wide range of downstream tasks. More recently, their adoption in non-NLP domains, particularly remote sensing, has received significant attention. This talk will revisit the AI/ML model lifecycle through a foundation-model lens and present key opportunities and challenges in edge deployments (e.g., a Kubernetes cluster in space). Such edge deployments introduce at least two types of challenges: (i) resource constraints at the infrastructure layer, and (ii) lack of human supervision at the data and AI layer.
These problems are seemingly exacerbated by foundation models, which are generally larger than their predecessors (statistical or deep learning models). However, foundation models also create new opportunities that can make them more attractive for operationalization. The ability to learn a common representation across multiple downstream tasks allows those tasks to share common subnets (reducing memory footprint and model loading time). Representations learnt by foundation models also become a key input to data retention (e.g., unusual embedding vector => higher retention) and to data selection for human supervision such as labeling and model re-training (e.g., select diverse embedding vectors for human supervision). This talk will outline some of these challenges and opportunities in operationalizing foundation models at the edge.
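One common way to realize the "select diverse embedding vectors" idea is greedy farthest-point sampling: repeatedly pick the vector farthest from everything already selected. The sketch below is an illustrative assumption about how such a selector might look, not the speaker's implementation.

```python
import math

# Illustrative sketch (an assumption, not IBM's method): greedily select
# k embedding vectors that are maximally spread out, so human labeling
# effort covers diverse regions of the representation space.

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_diverse(embeddings, k):
    """Return indices of k embeddings chosen by farthest-point sampling."""
    selected = [0]  # seed with the first vector
    while len(selected) < k:
        # Pick the candidate whose nearest selected neighbor is farthest away.
        best = max(
            (i for i in range(len(embeddings)) if i not in selected),
            key=lambda i: min(dist(embeddings[i], embeddings[j]) for j in selected),
        )
        selected.append(best)
    return selected

vecs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [-4.0, 3.0]]
print(select_diverse(vecs, 3))  # -> [0, 3, 4]: one vector from each cluster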
Biography: Mudhakar Srivatsa is a Distinguished Research Staff Member in the Hybrid Cloud department at the IBM T. J. Watson Research Center. His work is focused on cloud-native scaling of AI/ML workloads with applications to large-scale spatial and time-series data. He has led the deployment of AI-assisted solutions for air traffic control, IT operations, combating piracy in the maritime domain, and public safety in dense urban environments such as stadiums and music festivals.
Thanikesavan Sivanthi (ABB Corporate Research) — Autonomy in Industrial Systems: Are we there yet?
Abstract: The field of autonomous systems has advanced so rapidly over the past few years that it is ahead of what is happening in automation for industrial systems. Today’s automation technology for industrial systems has reached a level of autonomy somewhere between “occasional” and “limited”. To advance to the next levels of autonomy across the entire lifecycle, the architecture and design of tomorrow’s autonomous industrial systems will need to support autonomous engineering, operation, and maintenance. This talk will attempt to define the levels of autonomy for industrial systems, outline a path towards achieving full autonomy in industrial systems, and present the challenges and future directions of research in this domain.
Biography: Thanikesavan Sivanthi is a Senior Principal Scientist at ABB Research Switzerland. His research interests include real-time systems, software architecture, software testing, and applied AI. He received his master’s degree in information and communication systems from the Hamburg University of Technology, Germany, where he also completed his doctoral studies. His doctoral research focused on the optimal design of distributed real-time embedded systems. He continued with post-doctoral research on wireless communication system design for in-flight entertainment. He then shifted his focus to industrial research, joining the ABB Corporate Research Center in Switzerland in 2009, where he now conducts research on system architecture and efficient engineering solutions for industrial and power automation.
Rui Han (Beijing Institute of Technology) — Key Research Challenges for Improving Deep Vision Applications at Edge
Abstract: With the rapid development of edge computing and cloud-edge collaborative platforms, artificial intelligence (AI) techniques are entering our daily life. In particular, Computer Vision (CV) is a prevalent type of AI application, with examples such as autonomous vehicles, security monitoring, and anomaly detection. When running CV applications on resource-constrained edge devices, it is critical to improve model performance and accuracy while protecting data privacy. This talk discusses the problems and challenges of such edge intelligence scenarios, studies how to improve the training, inference, and adaptation performance of CV models, and reports the latest and most representative techniques in this area.
Biography: Dr. Rui Han is an associate professor and PhD supervisor at the School of Computer Science & Technology, Beijing Institute of Technology (BIT). Before joining BIT in 2014, he received his MSc with honors from Tsinghua University, China, in 2010, and obtained his PhD in 2014 from the Department of Computing, Imperial College London, UK. His research interests are cloud and edge computing, big data systems, and system optimization for highly parallel workloads (in particular, big data analytics and deep learning applications). He has over 50 publications in these areas, including papers at ACM MobiCom (Best Community Paper runner-up award), TC, TPDS, TKDE, TDSC, INFOCOM, ICDCS, ICPP, CCGrid, and CLOUD. He also serves as a technical advisor to several companies, including ThunderSoft (edge computing), SudoPrivacy (data privacy), and Transwarp (big data systems), and collaborates closely with companies such as Tencent, Midea, and CASICloud.