Macro Trends in the Technology Industry, March 2022 – iTWire

As we put together the Radar, we have a ton of interesting and enlightening conversations discussing the context of the ‘blips’, but not all of this extra information fits into the Radar format.

These “macro trends” articles allow us to add a bit of flavor and to zoom out and see the wider picture of what’s happening in the tech industry.

The ongoing tension between client- and server-based logic
Long industry cycles tend to swing us, pendulum-like, between a ‘client’ and a ‘server’ emphasis for our logic. In the mainframe era we had centralised computing and simple terminals, so all the logic — including where to move the cursor! — was handled by the server. Then came Windows and desktop apps, which pushed more logic and functionality into the clients, with “two-tier” applications using the server mostly as a data store and all the logic happening in the client. Early in the life of the internet, web pages were mostly just rendered by browsers, with little logic running in the browser and most of the action happening on the server. Now, with Web 2.0, mobile and edge computing, logic is again moving into the clients.

On this edition of the Radar, a couple of blips relate to this ongoing tension. Server-driven UI is a technique that allows mobile apps to evolve somewhat between client code updates, by letting the server specify the kinds of UI controls used to render a server response. TinyML allows reasonably large machine learning models to be run on cheap, resource-constrained devices, potentially letting us push ML to the extreme edges of the network.
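
To make the server-driven UI idea a little more concrete, here is a minimal sketch (in Python, purely for illustration) of a client rendering whatever controls the server describes. The payload shape, component types and screen name are invented for this example rather than taken from any particular framework.

    # A hypothetical sketch of server-driven UI: the server sends a declarative
    # description of the screen; the client maps each component type to a local
    # renderer it already knows how to draw.

    server_response = {
        "screen": "order_status",
        "components": [
            {"type": "header", "text": "Your order"},
            {"type": "progress", "value": 0.6},
            {"type": "button", "text": "Track shipment", "action": "open_tracking"},
        ],
    }

    def render_header(props):
        print(f"== {props['text']} ==")

    def render_progress(props):
        print(f"[progress: {props['value']:.0%}]")

    def render_button(props):
        print(f"({props['text']}) -> triggers '{props['action']}'")

    # The client ships with a fixed set of renderers; the server decides which
    # controls appear and in what order, so screens can evolve between app releases.
    RENDERERS = {
        "header": render_header,
        "progress": render_progress,
        "button": render_button,
    }

    for component in server_response["components"]:
        renderer = RENDERERS.get(component["type"])
        if renderer:  # unknown component types are skipped for forward compatibility
            renderer(component)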

The take-away here is not that there’s some new ‘right’ way of structuring a system’s logic and data, but rather that it’s an ongoing tradeoff that we need to evaluate continually. As devices, cloud platforms, networks and ‘middle’ servers gain capabilities, these tradeoffs will change, and teams should be ready to reconsider the architecture they have chosen.

“Gravitational” software
While working on the Radar we often discuss things that we see going badly in the industry. A common theme is over-use of a good tool to the point where it becomes harmful, or use of a specific kind of component well beyond the scope in which it’s really applicable. Specifically, we see a lot of teams over-using Kubernetes — “Kubernetes all the things!” — when it isn’t a silver bullet and won’t solve all our problems. We’ve also seen API gateways abused to fix problems with a back-end API, rather than fixing the problem directly.

We think that the “gravity” of software is an explanation for these antipatterns. This is the tendency for teams to find a center of gravity for behavior, logic, orchestration and so on, where it’s easier or more convenient to just continue to add more and more functionality, until that component becomes the center of a team’s universe. Difficulties in approving or provisioning alternatives can further lead to inertia around these pervasive system components.

The industry’s changing relationship to open source
The impact of open source software on the world has been profound. Linux, started by a young programmer who couldn’t afford a commercial Unix system but had the skills to create one, has grown to be one of the most used operating systems of our time. All of the top 500 supercomputers run on Linux, and 90% of cloud infrastructure uses it. From operating systems to mobile frameworks to data analytics platforms and utility libraries, open source is a daily part of life as a modern software engineer. But as the industry — and society at large — has been discovering, some very important open source software has a bit of a shaky foundation.

“It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you’ll be ignored and unappreciated until something goes wrong,” comments OpenSSL Foundation founder Steve Marquess.

Heartbleed was a bug in OpenSSL, a library used to secure communication between web servers and browsers. The bug allowed attackers to steal a server’s private keys and hijack users’ session cookies and passwords. It was described as ‘catastrophic’ by experts and affected about 17% of the internet’s secure web servers. The maintainers of OpenSSL patched the problem less than a week after it was reported, but remediation also required certificate authorities to reissue hundreds of thousands of compromised certificates. In the aftermath of the incident it turned out that OpenSSL, a security-critical library containing over 500,000 lines of code, was maintained by just two people.

Log4Shell was a recent problem with the widely-used Log4j logging library. The bug enabled remote access to systems and again was described in apocalyptic terms by security experts. Despite the problem being reported to maintainers, no fix was forthcoming for approximately two weeks, until the bug had started to be exploited in the wild by hackers. A fix was hurriedly pushed out, but left part of the vulnerability unfixed, and two further patches were required to fully resolve all the problems. In all, more than three weeks elapsed between the initial report and Log4j actually having a fully secure version available.

It’s important to be very clear that we are not criticizing the OpenSSL and Log4j maintenance teams. In the case of Log4j, the maintainers are a volunteer group who worked very hard to secure their software, giving up evenings and weekends for no pay, and who had to endure barbed comments and angry tweets while fixing a problem with an obscure Log4j feature that no person in their right mind would actually want to use and that existed only for backwards-compatibility reasons. The point remains, though: open source software is increasingly critical to the world but has widely varying models behind its creation and maintenance.

Open source exists between two extremes. Companies like Google, Netflix, Facebook and Alibaba release open source software which they create internally, fund its continued development, and promote it strongly. We’d call this “professional open source” and the benefit to those big companies is largely about recruitment — they’re putting software out there with the implication that programmers can join them and work on cool stuff like that. At the other end of the spectrum there is open source created by one person as a passion project. They’re creating software to scratch a personal itch, or because they believe a particular piece of software can be beneficial to others. There’s no commercial model behind this kind of software and no one is being paid to do it, but the software exists because a handful of people are passionate about it. In between these two extremes are things like Apache Foundation supported projects, which may have some degree of legal or administrative support and a larger group of maintainers than the small projects, and “commercialized open source” where the software itself is free but scaling and support services are a paid add-on.

This is a complex landscape. At Thoughtworks, we use and advocate for a lot of open source software. We’d love to see it better funded but, perversely, adding explicit funding to some of the passion projects might be counterproductive — if you work on something for fun because you believe in it, that motivation might go away if you were being paid and it became a job. We don’t think there’s an easy answer, but we do think that large companies leveraging open source should think deeply about how they can give back and support the open source community, and they should consider how well supported something is before taking it on. The great thing about open source is that anyone can improve the code, so if you’re using it, consider whether you can fix or improve it too.

Securing the software supply chain
Historically there’s been a lot of emphasis on the security of software once it’s running in production—is the server secure and patched, does the application have any SQL injection holes or cross-site scripting bugs that could be exploited to crack into it? But attackers have become increasingly sophisticated and are beginning to attack the entire “path to production” for systems, which includes everything from source control to continuous delivery servers. If an attacker can subvert the process at any point in this path, they can change the code to intentionally introduce weaknesses or back doors and thus compromise the running system, even if the final server on which it runs is very well secured.

The recent exploit for Log4j, which we mentioned in the previous section on open source, shows another vulnerability in the path to production. Software is generally built using a combination of from-scratch code specific to the business problem at hand and library or utility code that solves an ancillary problem and can be reused to speed up delivery. Log4Shell was a vulnerability in Log4j, so anyone who had used that library was potentially vulnerable (and given that Log4j has been around for more than a decade, that could be a lot of systems). The problem then becomes figuring out whether your software includes Log4j and, if so, which version. Without automated tools this is an arduous process, especially when the typical large enterprise has thousands of pieces of software deployed.

The industry is waking up to this problem, and we previously noted that even the US White House has called out the need to secure the software “supply chain.” Borrowing another term from manufacturing, a US executive order directs the IT industry to establish a software “bill of materials” (SBOM) that details all of the component software that has gone into a system. With tools to automatically create an SBOM, and other tools to match vulnerabilities against an SBOM, the problem of determining whether a system contains a vulnerable version of Log4j is reduced to a simple query and a few seconds of processing time. Teams can also look to Supply-chain Levels for Software Artifacts (SLSA, pronounced ‘salsa’) for guidance and checklists.
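
As a rough illustration of that “simple query”, here is a small sketch that scans a CycloneDX-style JSON SBOM for Log4j components below a fixed version. The file name, the version threshold and the focus on log4j-core are assumptions made for this example; a real check should match component identifiers against the published advisories for the relevant CVEs.

    # A hypothetical sketch: query a CycloneDX-style SBOM (JSON) for Log4j
    # components older than an assumed fixed version. Not a substitute for
    # matching against real vulnerability advisories.
    import json

    def parse_version(version):
        """Best-effort numeric parse of a version string like '2.14.1'."""
        parts = []
        for piece in version.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            parts.append(int(digits) if digits else 0)
        return tuple(parts)

    def vulnerable_log4j(sbom_path, fixed_in=(2, 17, 1)):
        """Return SBOM components named 'log4j-core' with a version below fixed_in."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        return [
            component
            for component in sbom.get("components", [])
            if component.get("name") == "log4j-core"
            and parse_version(component.get("version", "0")) < fixed_in
        ]

    if __name__ == "__main__":
        # "sbom.json" is a placeholder path for an SBOM produced by your tooling.
        for hit in vulnerable_log4j("sbom.json"):
            print("Potentially vulnerable:", hit["name"], hit["version"])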

Suggested Thoughtworks podcast: Securing the software supply chain

The demise of standalone pipeline tools
“Demise” is certainly a little hyperbolic, but the Radar group found itself talking a lot about GitHub Actions, GitLab CI/CD and Azure Pipelines, where the pipeline tooling is subsumed into either the repository or the hosting environment. Couple that with the previously observed tendency for teams to use the default tool in their ecosystem (GitHub, Azure, AWS, etc.) rather than looking at the best tool, technique or platform to suit their needs, and some of the standalone pipeline tools might be facing a struggle. We’ve continued to feature ‘standalone’ pipeline tools such as CircleCI, but even our internal review cycle revealed some strong opinions, with one person claiming that GitHub Actions did everything they needed and that teams shouldn’t use a standalone tool. Our advice here is to consider both ‘default’ and standalone pipeline tools and to evaluate them on their merits, which include both features and ease of integration.

SQL remains the dominant ETL language
We’re not necessarily saying this is a good thing, but the venerable Structured Query Language remains the tool the industry most often reaches for when there’s a need to query or transform data. No matter how advanced our tooling or platforms become, SQL remains the common denominator chosen for data manipulation. A good example is the preponderance of streaming data platforms, such as ksqlDB, that allow SQL queries over their state or use SQL to build up a picture of the in-flight data stream.
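
To show what “SQL over an in-flight stream” looks like, here is a hedged sketch that pushes a continuous query to a ksqlDB server over its REST API. The host, port and the orders stream are invented for this example, and the response handling is deliberately simplistic.

    # A hypothetical sketch: issue a continuous ("push") SQL query against a
    # ksqlDB server. Assumes ksqlDB is listening on localhost:8088 and that a
    # stream called `orders` already exists; both are illustrative only.
    import requests

    KSQLDB_URL = "http://localhost:8088"

    query = """
        SELECT item, SUM(amount) AS total
        FROM orders
        GROUP BY item
        EMIT CHANGES;
    """

    response = requests.post(
        f"{KSQLDB_URL}/query",
        headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
        json={"ksql": query, "streamsProperties": {}},
        stream=True,  # results arrive incrementally as the stream is processed
    )

    # Print the raw chunks; a real client would parse them into rows.
    for line in response.iter_lines():
        if line:
            print(line.decode("utf-8"))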

SQL has the advantage of having been around since the 1970s, with most programmers having used it at some point. That’s also a significant disadvantage — many of us learnt just enough SQL to be dangerous, rather than competent. But with additional tooling, SQL can be tamed, tested, efficient and reliable. We particularly like dbt, a data transformation tool with an excellent SQL editor, and SQLfluff, a linter that helps detect errors in SQL code.
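
As a small example of that kind of tooling, here is a sketch using SQLFluff’s Python API to lint a hand-written query before it goes anywhere near production; the query and the choice of dialect are purely illustrative, and it assumes sqlfluff is installed.

    # A small sketch: lint ad-hoc SQL with SQLFluff. Assumes `pip install sqlfluff`;
    # the query and dialect are illustrative only.
    import sqlfluff

    query = """
    select id, SUM(amount) as total
    from payments
    group by id
    """

    # lint() returns a list of violations (rule code, position, description),
    # which makes it easy to wire into a CI step alongside dbt tests.
    for violation in sqlfluff.lint(query, dialect="ansi"):
        print(violation)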

The neverending quest for the master data catalogue
A continuing theme in the industry is the importance and latent value of corporate data: more and more use cases can take advantage of this data, and machine learning and artificial intelligence are opening up interesting and unexpected new capabilities. But for as long as companies have been collecting data, there have been efforts to categorise and catalogue it, and to merge and transform it into a unified format, in order to make it more accessible and more reusable and to generally ‘unlock’ the value inherent in the data.

Strategies for unlocking data often involve creating what’s called a “master data catalogue” — a top-down, single corporate directory of all data across the organisation. There are ever fancier tools for attempting such a feat, but they consistently run into the hard reality that data is complex, ambiguous, duplicated, and even contradictory. Recently the Radar has included a number of proposals for data catalogue tools, such as Collibra.

But at the same time, there is a growing industry trend away from centralised data definitions and towards decentralised data management through techniques such as data mesh. This approach embraces the inherent complexity of corporate data by segregating data ownership and discovery along business domain lines. When data products are decentralised and controlled by independent, domain-oriented teams, the resulting data catalogues are simpler and easier to maintain. Additionally, breaking down the problem this way reduces the need for complex data catalogue tools and master data management platforms. So although the industry continues to strive for an answer to ‘the’ master data catalogue problem, we think it’s likely the wrong question and that smaller, decentralised catalogues are the answer.

That’s all for this edition of Macro Trends. Thanks for reading and be sure to tune in next time for more industry commentary. Many thanks to Brandon Byars, George Earle, and Lakshminarasimhan Sudarshan for their helpful comments.

Published at Mon, 11 Apr 2022 00:21:24 +0000

Artificial Intelligence and Machine Learning Market Worldwide Growth, Share, Industry …

Market Size And Forecast

New Jersey, USA – The global Artificial Intelligence and Machine Learning market is examined comprehensively and in depth in the report, which focuses on the competitive landscape, regional growth, market segmentation and market dynamics. The latest primary and secondary research techniques were used in preparing this comprehensive research study. The report provides Porter’s five forces analysis, competitive analysis, manufacturing cost analysis, sales and production analysis, and various other types of analysis to give a complete overview of the global Artificial Intelligence and Machine Learning market. Each segment of the global market is carefully analyzed on the basis of market share, CAGR and other important factors. The global Artificial Intelligence and Machine Learning market is also presented statistically with the help of annual growth, CAGR, sales, production and other important calculations.

We can customize the report to your requirements. Our analysts are experts in Artificial Intelligence and Machine Learning market research and analysis and have in-depth experience in customizing reports, having served a large number of clients to date. The main objective of the research study is to inform you about future challenges and opportunities in the market. The report is one of the best resources you can use to secure a strong position in the global Artificial Intelligence and Machine Learning market.

Get | Download Sample Copy with TOC, Graphs & List of Figures@ https://www.marketresearchintellect.com/download-sample/?rid=292536

Our report contains current and latest market trends, market shares of companies, market forecasts, competition benchmarking, competition mapping and an in-depth analysis of the most important sustainability tactics and their impact on market growth and competition. To estimate quantitative aspects and segment the global Artificial Intelligence and Machine Learning market, we used a recommended combination of top-down and bottom-up approaches. We examined the global Artificial Intelligence and Machine Learning market from three key perspectives through data triangulation. Our iterative and comprehensive research methodology helps us to provide the most accurate market forecasts and estimates with minimal errors.

The major players covered in the Artificial Intelligence and Machine Learning market:

  • AIBrain
  • Amazon
  • Anki
  • CloudMinds
  • Deepmind
  • Google
  • Facebook
  • IBM
  • Iris AI
  • Apple
  • Luminoso
  • Qualcomm

Artificial Intelligence and Machine Learning Market Breakdown by Type:

  • Deep Learning
  • Natural Language Processing
  • Others

Artificial Intelligence and Machine Learning Market Breakdown by Application:

  • Healthcare
  • BFSI
  • Law
  • Retail
  • Advertising & Media
  • Automotive & Transportation
  • Agriculture
  • Manufacturing

As part of our quantitative analysis, we have provided regional market forecasts by type and application, market sales forecasts and estimates by type, application and region by 2030, and global sales and production forecasts and estimates for Artificial Intelligence and Machine Learning by 2030. For the qualitative analysis, we focused on political and regulatory scenarios, component benchmarking, technology landscape, important market topics as well as industry landscape and trends.

We have also focused on technological lead, profitability, company size, company valuation in relation to the industry and analysis of products and applications in relation to market growth and market share.

Get | Discount On The Purchase Of This Report @ https://www.marketresearchintellect.com/ask-for-discount/?rid=292536

Artificial Intelligence and Machine Learning Market Report Scope 

  • Market size available for years: 2022 – 2030
  • Base year considered: 2021
  • Historical data: 2018 – 2021
  • Forecast period: 2022 – 2030
  • Quantitative units: Revenue in USD million and CAGR from 2022 to 2030
  • Segments covered: Types, Applications, End-Users, and more
  • Report coverage: Revenue forecast, company ranking, competitive landscape, growth factors, and trends
  • Regional scope: North America, Europe, Asia Pacific, Latin America, Middle East and Africa
  • Customization scope: Free report customization (equivalent to up to 8 analyst working days) with purchase; addition or alteration to country, regional and segment scope
  • Pricing and purchase options: Avail of customized purchase options to meet your exact research needs (explore purchase options)

The regional market analysis for Artificial Intelligence and Machine Learning can be represented as follows:

This part of the report assesses key regional and country-level markets on the basis of market size by type and application, key players, and market forecast. 

On the basis of geography, the global Artificial Intelligence and Machine Learning market is segmented as follows:

    • North America includes the United States, Canada, and Mexico
    • Europe includes Germany, France, UK, Italy, Spain
    • South America includes Colombia, Argentina, Nigeria, and Chile
    • The Asia Pacific includes Japan, China, Korea, India, Saudi Arabia, and Southeast Asia

For More Information or Query or Customization Before Buying, Visit @ https://www.marketresearchintellect.com/product/global-artificial-intelligence-and-machine-learning-market-size-and-forecast/ 

About Us: Market Research Intellect

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations, with the objective of delivering customized and in-depth research studies. We offer logical research solutions, custom consulting and in-depth data analysis covering a range of industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, and Food and Beverages. Our research studies help our clients make better data-driven decisions, understand market forecasts, capitalize on opportunities and optimize efficiency by acting as their partner to deliver accurate and reliable information without compromise. Having served more than 5,000 clients, we have provided expert market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi.

Contact us:
Mr. Edwyne Fernandes
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Website: https://www.marketresearchintellect.com/

Published at Sun, 10 Apr 2022 23:55:37 +0000