LupoToro


Mandatory Information Sharing - Forced by AI

Microsoft Windows computers now mandate that all new account setups grant access to Microsoft’s backend AI (artificial intelligence which, like all AI research and development, is growing at a rate that far exceeds traditional measures of technological progress, such as Moore’s Law, at the time of writing). It is unclear when this mandatory change was implemented, but it is Windows-wide, meaning that whether you have an existing PC or a new PC, the updated Terms of Service require mandatory AI access.

Whilst there is still an option to set up a new account in ‘offline mode’ (i.e. when setting up a new PC, opting to proceed without initially connecting to the internet), agreeing to Windows updates or reconnecting the system to the internet will involve agreeing to terms under which the AI is granted full access.

This update to service terms is not specific to Microsoft: Apple, Amazon, Google, Huawei, Samsung, and all other major technology providers and producers involved in AI include similar terms.

Big tech AI generates the output it does because it is precisely executing the specific ideological, radical, biased agenda of its creators, or of the data that it draws from (source materials curated or specified as input). Any bizarre output that is noticed is, internally at big tech firms, intended, despite general media reporting and surface-level public evaluation labelling such output as ‘errors’. It is working as designed, according to Elon Musk, who is developing the ‘Grok’ AI and who, notably, is the only major AI developer not mandating total access to a user’s end systems (phone, computer, devices, and stored information).

So, where does your information go once access is granted to an overshadowing AI built into your device? To date, no big tech provider is able to tell you, and they have no reason to tell you, either, as global government legislation does not exist for AI and AI-collected information. Human-to-human data collection is one thing, and it is highly regulated in most nations globally: a big tech firm must specify how information is collected, how it is stored, what it will be used for, and why it is needed in the first place.

But that is human-based data collection, collected through algorithms, not AI, and certainly not sentient AI (which, at the time of writing, does not exist). Legally speaking, a highly advanced AI (such as Microsoft’s OpenAI-powered systems) can collect, use, and store all the personal information on your device without adhering to standardised data collection laws, because the AI is acting on its own; this is the technicality on which big tech skirts.

An AI cannot be sued; it is self-sufficient, despite big tech having access to the back end and all that is there. The technicality big tech uses to avoid fully transparent reporting rests on the fact that AIs currently use large models (i.e. data sets) to provide information and produce materials (such as generated images and videos), drawing from millions or billions of data points at any given moment; manual (human) curation of all this is therefore impossible, and the AI is somewhat self-sufficient, operating on its own (coded) terms. Managing everything it does is not as easy as traditional data farming. No, we are not yet at the point where it is sentient, but at that point the problem only escalates. Precedent shows that lawmakers are decades (if not more) behind technology and human rights values when it comes to lawmaking, and laws take a long time to pass.

The current solution would be for big tech not to mandate AI access in data collection and device access, but this is not happening. Until there is legal pressure and laws are passed requiring big tech to apply the same data collection laws to AI-managed access and content, the ambiguity will remain, and grow.