I wanted to share the final stage of the AGI structure I have been researching for a long time.
In the Artificial Intelligence landscape, autonomous structures that can take actions on their own are gaining importance over standard input-output approaches. This is expected to be a turning point.
Just like humans, as learned capabilities increase, the scope of influence expands. Within this scope, we refer to the learned areas as “tools.” In other words, as the number and capability of embedded tools increase, the capacity expands, and the standard AI approach evolves toward AGI.
There is another interesting point: although the project is still in development, I wanted to share the set of “tools” in its latest workable form:
python-test/src/create_repo_on_org_github.py at main · dialoget/python-test
[Creating Modular Code with No Dependencies | Toptal®](https://www.toptal.com/software/creating-modular-code-with-no-dependencies)
Autonolas Developer Documentation: Open Autonomy is a framework for the creation of agent services: off-chain autonomous services which run as a multi-agent system (MAS) and offer enhanced functionalities on-chain. Agent services expand the range of operations that traditional smart contracts offer, making it possible to execute arbitrarily complex operations (such as machine-learning algorithms). Most importantly, agent services are decentralized, trust-minimized, transparent, and robust.
[An Autonomous Service-Oriented Orchestration Framework for Software Defined Mobile Networks - IEEE Conference Publication | IEEE Xplore](https://ieeexplore.ieee.org/document/8685919)
Next-Gen Autonomous System Design Made Easier with DDS and ROS
Topology Architecture Overview — Topology Framework 1.9.15 documentation
A modular automation framework is an automation development approach that divides the automation process into smaller, independent modules. These modules can then be reused and combined in different ways to create automated tests for a variety of applications and systems.
Benefits of using a MAF
| Requirement | Solution | Tool |
| --- | --- | --- |
| Privately share all the resources online to allow the collaborators to work from everywhere at any time. | Web platform | GitLab |
| Put the source code under revision control to allow safe individual experimentation by branching the code without breaking the working solution. | Revision control | Git |
| Share a common environment to make sure that all collaborators use the same libraries and dependencies, making abstraction of the local system hardware and environment, either virtual or not. | System containerization | Docker |
| Store online all the large binaries, such as datasets and trained models, that are required at runtime and automatically download them when necessary. | Cloud storage | OwnCloud |
| Use an automated setup so that new collaborators can integrate the team quickly without prior knowledge. | Shell scripting | Bash |
| Implement new features with modules connected through a common interface, hence making abstraction of the communication between them. | Robotics middleware | ROS [26] |
| Develop complex robot behaviors visually to prioritize code reusability and failure recovery during service tasks. | State machine | FlexBE [22] |
| Automatically compile the solution every time the source code is modified to monitor and detect compilation and/or environment issues as early as possible. | Continuous integration | GitLab Runner |
| Provide general unit tests and/or simulations to detect runtime logic errors as much as possible. | Unattended simulation | Gazebo [27] |
| Periodically deploy the successfully tested code to serve as a new base between all collaborators. | Container registry | GitLab Registry |
| Allow large-scale simulation deployment on multiple machines, either virtual or not, by making abstraction of the underlying hardware and cloud infrastructure. | Container orchestration | Kubernetes |
| Produce and maintain all the documentation by collaborative editing in a centralized location. | Wiki engine | GitLab Wiki |
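To illustrate the containerization requirement from the table, a shared development environment can be pinned in a Dockerfile that every collaborator builds identically. This is only a sketch; the base image tag and package list are assumptions, not the project's actual setup:

```dockerfile
# Illustrative shared dev environment: every collaborator builds the same
# image, so libraries and dependencies match regardless of the host machine.
FROM ros:noetic-ros-base
RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
COPY . /workspace
CMD ["bash"]
```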
Overview of the goals of the Software Development Environment (SDE).
Overview of the containerized SDE workflow.
reset-catkin-workspace.bash
moduletool/ROS2Docker: a repository providing a ROS 2 Docker system
Do you also hit a wall when growing a business in the SaaS model?
Talking to the customer is not a very sophisticated tool, but in most cases it works. Now imagine a situation where the customer orders software and all of their feedback is incorporated directly by a service that is part of an autonomous system. The solution is then corrected both through customer feedback and through the framework's own analysis of how the generated service behaves.
#AutonomousOperationsFramework https://www.workato.com/accelerators/autonomous_operations_framework
The solution may be an AutonomousOperationFramework, in short: a decorator plus logs stored as objects in a database, with analysis of inputs and outputs. You do not have to hire people; it is a question of how you approach the SDLC. Programming should be a manufacturing process for an algorithm with a feedback loop, built from modules or generated by ML from a specification and refined through logs and debugging.
Our solution not only speeds up software development but also increases quality, modularity, and the reuse of already tested modules in future projects.
The ModuleTool framework is hypermodular code: HTML is a structure that can be controlled, because the JS is also written against these events. The essence is generating the frontend and backend from a single piece of documentation, so that they stay consistent and expectations are coherent and can be validated. It is a matter of selecting the right sets of modules in the modular code network so that they are already validated on their own.
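A minimal sketch of the "one spec, consistent frontend and backend" idea: a single specification drives both the generated HTML form and the server-side validation, so the two can never drift apart. The SPEC format and field names here are illustrative assumptions, not ModuleTool's actual schema:

```python
# A single spec drives both frontend generation and backend validation.
SPEC = {"fields": [{"name": "email", "type": "text", "required": True}]}

def render_form(spec):
    """Generate the frontend (an HTML form) from the spec."""
    parts = []
    for f in spec["fields"]:
        required = " required" if f["required"] else ""
        parts.append('<input name="%s" type="%s"%s>' % (f["name"], f["type"], required))
    return "<form>%s</form>" % "".join(parts)

def validate(spec, data):
    """Generate backend validation from the same spec: list missing required fields."""
    return [f["name"] for f in spec["fields"]
            if f["required"] and not data.get(f["name"])]

print(render_form(SPEC))   # -> <form><input name="email" type="text" required></form>
print(validate(SPEC, {}))  # -> ['email']
```

Because both sides are derived from one source, a change to the spec updates the form markup and the validation rules together.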
This project was created as an answer to having neither the time nor the resources for prototyping. Growth also has to be sustained without extra cost and with easy integration. That is why, instead of using distributed infrastructure, it is best to consolidate the services on a single machine that runs all the required processes, even a laptop.
Today each of us can build a One Person SaaS. Starting from zero, with only a few API-key integrations, you can connect the framework to a git repository and the OpenAI API to obtain self-updating software that improves itself based on its own errors.
GenAI surprised us last year!
We need more tools for this. Such component management tools are more important than knowledge of hypermodularity alone. Here is a tool that brings us the power of serving an open and hypermodular architecture.
ModuleTool supports creating and managing autonomous or automated services in a system architecture, with a focus on modularity, scalability, and self-management.
The concept of autonomous services is associated with microservices architectures where different services operate independently and possibly with autonomous decision-making abilities. In such architectures, services communicate with each other through well-defined interfaces, and each service is responsible for a specific piece of functionality within the larger application.
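The "well-defined interface" idea can be sketched in a few lines: every component exposes the same entry point, so components stay independent and interchangeable. The `Component` protocol and the pipeline below are illustrative, not part of any named framework:

```python
from typing import Protocol

class Component(Protocol):
    """The well-defined interface every service-style component exposes."""
    def run(self, payload: dict) -> dict: ...

class Tokenizer:
    def run(self, payload: dict) -> dict:
        return {"tokens": payload["text"].split()}

class TokenCounter:
    def run(self, payload: dict) -> dict:
        return {"count": len(payload["tokens"])}

def pipeline(components, payload):
    # Components communicate only through the shared run(payload)
    # interface, so they can be swapped or recombined freely.
    for component in components:
        payload = component.run(payload)
    return payload

print(pipeline([Tokenizer(), TokenCounter()], {"text": "a b c"}))  # -> {'count': 3}
```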
Examples of tools and platforms that offer similar functionality to what you might expect in an Autonomous Services Framework include:
The mermaid diagram below shows how the Autonomous Services Framework (ModuleTool) uses a decorator on each component to save logs to the database and observe how the software behaves, including how long each component takes to start. Each component is a function generated from a sentence-like definition, with input arguments and returned objects; all of these are saved directly in the database as serialized objects or JSON objects, so the results can be checked later to verify that the function is working properly.
```mermaid
sequenceDiagram
    participant Client
    participant Decorator
    participant Component
    participant Database
    Note over Client,Component: Client invokes a component function
    Client->>Decorator: call(functionArgs)
    Note over Decorator: Decorator intercepts the call
    Decorator->>Database: Save initial log with timestamp
    Decorator->>Component: Invoke real function(functionArgs)
    Note over Component: Function execution takes place
    Component-->>Decorator: Return result
    Decorator->>Database: Serialize and save result (JSON)
    Decorator->>Database: Save end log with timestamp
    Note over Decorator: Calculate execution time
    Decorator-->>Client: Return result
```
Layers of software:
```mermaid
classDiagram
    class Environment {
        +Docker
    }
    class Component {
        +Function()
    }
    class Data {
        +Input
        +Output
        +Logs
    }
    class Automation {
        +SDLC Processes
    }
    Component --|> Environment: Runs within
    Data --|> Component: Used by
    Automation --|> Data: Manages
```
A rough interpretation of the classes: Environment is the containerized runtime (Docker) that components run within; Component is a single function-based service; Data covers a component's inputs, outputs, and logs; and Automation covers the SDLC processes that manage that data. The connector lines express these relationships: a Component runs within an Environment, Data is used by a Component, and Automation manages the Data. While a service is running, the component decorator keeps saving logs, together with the input and output variables, to the database.
A prototyping framework for component-based software development plays a crucial role in enabling fast iteration, simulation, and testing. Here are some examples of prototyping frameworks and supporting tools that could be used for this purpose:
Robot Operating System (ROS): ROS is an open-source robotic middleware that offers a collection of tools, libraries, and conventions for simplifying the task of creating complex and robust robot behavior across a wide variety of platforms. ROS components are organized into packages, and each package can contain nodes, libraries, and drivers, making it a powerful prototyping tool for autonomous systems.
Gazebo: Often used in conjunction with ROS, Gazebo is a robotics simulator that provides an excellent virtual environment to prototype, design, and test autonomous robots in various conditions without the inherent risk and cost associated with physical prototyping.
Microsoft Robotics Developer Studio (MRDS): MRDS offers an integrated, end-to-end development environment for building and testing robot applications. It includes a set of tools that support rapid prototyping through the use of a realistic 3D simulation environment.
Webots: Webots is another open-source robot simulator that provides a complete development environment to model, program, and simulate robots. With a wide variety of virtual sensors and actuators, it’s suitable for prototyping component-based autonomous systems.
Unity + ML-Agents: Unity, primarily known for game development, offers a rich 3D simulation environment that can be leveraged to prototype autonomous systems. ML-Agents, a Unity plugin, enables machine learning agents to be trained in realistic scenarios, making it suitable for AI-driven autonomous system components.
CoppeliaSim (formerly V-REP): CoppeliaSim is a robot simulation software with an integrated development environment that supports various types of sensors and actuators, capable of simulating complex algorithms and robotic behavior for prototyping purposes.
MATLAB and Simulink: Widely used for control systems design and simulation, these tools offer a block-diagram environment for multi-domain simulation and Model-Based Design. They are powerful for prototyping and testing the control algorithms of autonomous systems.
PX4 Autopilot Software: PX4 provides an open-source flight control software for drones and other unmanned vehicles. It has a modular design with numerous prebuilt components, making it suitable for the rapid prototyping of autonomous aerial systems.
Apollo: An open autonomous driving platform by Baidu, Apollo provides a comprehensive, flexible, and secure platform that includes a full set of innovative features tailored for autonomous vehicles. The platform is modular and scalable, allowing for rapid iteration and prototyping of autonomous driving components.
AutoWare: AutoWare is an all-in-one open-source software for autonomous driving. It is based on ROS and designed to implement all the necessary components for urban autonomous driving. It allows for prototyping of various aspects of self-driving systems from perception to control.
The tool is about a network of code (service-based components): based on git-versioned code, it helps manage the level of reusability.
The network of code needs service-based component management tools that provide the ability to view, install, and register components according to a model-based approach. Moreover, in practice, reuse is not a binary concept: there is a need to control and administer levels of reuse.
Creating a service-based component management tool that integrates with versioned code repositories, such as those managed by Git, and aids in measuring code reusability across various levels, such as class, file, protocol, hardware virtualization, and network topology, would require some sophisticated features. Here is an outline of a solution approach that might help in building such a tool:
Version Control Integration: The tool would need to integrate seamlessly with Git or other version control systems to track changes in code and components over time.
Reusability Metrics and Analysis: It should be able to analyze code to determine the reusability of various components. It could use metrics like the number of times a class or function is reused, the coupling between components, and other established software engineering metrics.
Dependency Mapping: The solution would employ dependency mapping and visualization to understand the relationship between different components and services. This could extend to understanding the implications on network topology as well.
Code Scanning and Cataloging: Automated scanning and cataloging of the repository to identify reusable code components. Each class, file, and protocol could be tagged with metadata to facilitate searching and filtering based on reusability factors.
Hardware Virtualization and Network Topology Tools: Integration with tools and platforms that manage hardware virtualization and network topology, like VMware, OpenStack, or Cisco’s network management tools, could provide insights into how reusability is affected by hardware or network changes.
Documentation and Reporting: The tool should generate documentation and reports that provide developers and teams with insights into the levels of reusability within their projects.
Governance and Compliance: Ensure that the management tool supports compliance with industry standards and best practices for code reusability and maintainability.
User Interface (UI): A user-friendly UI that allows developers and managers to interact with the management tool effectively, providing quick access to various metrics and analyses, and a clear visualization of component relationships and network dependencies.
APIs and Extensibility: An API layer that allows for the tool’s integration with other systems and extensibility so that it can accommodate future requirements, such as new metrics for reusability or changing standards for service-based architecture.
Collaboration and Workflow Support: Features that facilitate communication and collaboration within development teams, with built-in workflows that support code review, component sharing, and reuse.
Automation and CI/CD Integration: To fit into modern DevOps practices, the tool should integrate with existing continuous integration and continuous deployment pipelines, automating the assessment of reusability as part of the CI/CD process.
By focusing on these key aspects, a service-based component management tool can provide a comprehensive overview and management of a codebase’s reusability at various abstraction levels, from individual classes to entire service components operating within a hardware and network infrastructure context.
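As a toy illustration of the reusability-metrics idea, the snippet below counts how many call sites each function has across a set of source files using Python's `ast` module. It is a deliberately simplified proxy (it only catches plainly-named calls), not a production metric:

```python
import ast
from collections import Counter

def call_counts(sources):
    """Count calls to each plainly-named function across source strings.

    A simple reuse metric: the more call sites a component has,
    the higher its observed reuse."""
    counts = Counter()
    for src in sources:
        for node in ast.walk(ast.parse(src)):
            # Only counts calls like parse(x); attribute calls such as
            # obj.parse(x) are ignored in this simplified sketch.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                counts[node.func.id] += 1
    return counts

modules = [
    "def parse(x):\n    return x\nparse(1)",
    "parse(2)\nparse(3)",
]
print(call_counts(modules))  # -> Counter({'parse': 3})
```

A real tool would walk the Git repository instead of in-memory strings and combine such counts with coupling and dependency metrics.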
The Git versioning system works at the code level and extends the capabilities of the modular network of code. Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers who are collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows (thousands of parallel branches running on different computers).