
Supporting Little Tech is the Practical Way to Deal with Big Tech

ygolo

I am going to keep beating this drum until people change their minds about SB 1047. I realize this little corner of the internet has limited reach. But I need people to stop appealing to authority regarding whether or not it would have a chilling effect. As someone seriously contemplating freelance (i.e., not a billion-dollar lab) research into Artificial Intelligence, and who sees that as the best way to just be okay, this bill is almost a death sentence for me.

If I were polled about a bill that people call the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," and was told verbally that it would only affect people training with over $100 million, I would have said I support it, even if I read it without thinking through whether or not the verbiage itself is implementable, unambiguous, or could possibly affect me. Because I do support safe and secure innovation for AI. I am pro-regulating AI. I am anti-this bill (as is).

This affects more than just training.

Section 3, (f) (4) of SB 1047 defines:
(4) A copy of a covered model that has been combined with other software.
as a "covered derivative model"

Note that section 3 (i) defines:
(i) “Developer” means a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing power, or by fine-tuning an existing covered model using sufficient quantity of computing power pursuant to subdivision (e).

So, clearly, they are using computing power, not money, to define a crucial aspect of what is covered. See the graph I posted earlier. The defined limit is basically where we are now.

What it requires developers to do is in subsection (l):
(l) “Full shutdown” means the cessation of operation of any of the following:
(1) The training of a covered model.
(2) A covered model.
(3) All covered model derivatives controlled by a developer.

Seriously, they require full shutdown of all covered models controlled by the developer?

The most common way people will use a covered model derivative is through some cloud provider. If one of the people using the same covered model derivative uses it to defame a celebrity, and that is found to cost more than $500M, would the developer have to shut down the same model being used for medical-care automation?

Media figures breaking innovation and then complaining that innovation isn't evenly distributed would be right in line with the new kernels of anti-prosperity, anti-growth, anti-tech sentiment.
 

ygolo

People have been writing as if the amendments to SB 1047 made on Monday, August 19th or yesterday (August 22nd) assuage the fears that I have been talking about.

I once again implore people to read the Bill Text, instead of relying on commentary about what the bill says.

There have been some improvements, and some regressions. Ultimately, this bill remains regulatory capture of software by "Effective Altruists" (EAs). (EA is the utilitarian philosophy of rationalized subjugation followed by the likes of Sam Bankman-Fried.)

Read the following definition in 22602 (b) and tell me that by "Artificial Intelligence," they don't identically mean "Software":
(b) “Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

The fundamental issue is that the bill aims to regulate models and not deployments. Trying to fix things may just be too hard.


Again, I am just someone who reads English and who needs to be aware of this regulation. I also want to say, it sucks that, as just a freelancer, I have to be aware of this bill. --Chilling effect: wasting time on being aware of this instead of talking to potential clients and building solutions for them.

Not allowing people to, at this stage, experiment with abandon (when much of this technology is yet to be deployed) comes mainly from bad analogies, and ultimately from a Pascal's Mugging of the riches of the industry by the EAs (Anthropic in particular, but also a lot of the big incumbent safety labs with the "Effective Altruism" tradition).

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The language around safety incidents will now affect a lot more people. It does not seem carefully thought out when to use "covered model" versus "covered model derivative." This is even closer to defining thought crime.
(c) “Artificial intelligence safety incident” means an incident that demonstrably increases the risk of a critical harm occurring by means of any of the following:
(1) A covered model or covered model derivative autonomously engaging in behavior other than at the request of a user.
(2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered [struck: model.] [added: model or covered model derivative.]
(3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered [struck: model.] [added: model or covered model derivative.]
(4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm
*The original strikethrough and italic formatting did not survive here: text marked [struck: ...] was removed and text marked [added: ...] was inserted. You can see the comparison by comparing the bill to the 07/03/2024 version.

I talked about how, at the end of next year, you will be able to run a copy of whatever the equivalent of Llama 3.1 is for about $33/hour. There is so much you could do with that, except that now you will have to hire a lawyer to comply with SB 1047. That means what many estimate to be $0.5M in legal fees just to start.

Copying an open source model is something even a freelancer can do. Can you, with a straight face, honestly tell me that the above changes weren't written specifically for Big Tech to win? For the same reason that voter IDs disenfranchise people from voting, these sorts of things disenfranchise people from feeding their families.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I think that, in response to complaints showing very plainly that the bill works on compute limits and not cost, some language around cost has been added in many places.
(ii) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point [struck: operations.] [added: operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.]
Tell me, does this read like it is actually a money limit, or is it just that somebody said it implies a money limit (which it clearly won't in 10 years if Moore's Law continues)? This clearly locks in the current cloud barons: Amazon (Anthropic), Microsoft (OpenAI), and Google.

Using Linode, Vultr, or on-premise compute can often be cheaper (and more secure). But the cost savings won't matter in terms of having to hire a team of lawyers, because the bill counts cost at the average market price of cloud compute, so cheaper providers won't bring you under the cost limit.

Note also that it says $10 million, which is easily a Series B amount of money. So it clearly affects startups. And this is just for fine-tuning. Again, copying is something even an individual of modest means can do.
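To make the threshold arithmetic concrete, here is a minimal sketch (my own illustration, not anything from the bill): it estimates the market-price cost of a fine-tuning run from its operation count and checks both prongs of the amended definition. The FLOP/s-per-GPU and $/GPU-hour figures are assumptions I picked for the example, and they are exactly the kind of numbers that drift as hardware gets cheaper, which is my complaint about tying the law to a fixed compute count.

Python:
def crosses_finetune_threshold(total_ops: float,
                               flops_per_gpu: float = 1.0e15,    # assumed sustained FLOP/s per GPU
                               usd_per_gpu_hour: float = 2.50):  # assumed "average market price" per GPU-hour
    # Amended SB 1047 prongs: >= 3e25 integer or floating-point operations,
    # and a cost exceeding $10M at average market cloud prices. Illustrative only.
    gpu_hours = total_ops / flops_per_gpu / 3600.0
    estimated_cost = gpu_hours * usd_per_gpu_hour
    return (total_ops >= 3e25) and (estimated_cost > 10_000_000), estimated_cost

covered, cost = crosses_finetune_threshold(3e25)
print(f"estimated market-price cost: ${cost:,.0f}, covered: {covered}")

Under these assumed prices, 3 x 10^25 operations works out to a bit over $20M, so today the compute prong already implies more than $10M; as price per operation falls, the same operation count will cost far less, which is exactly the drift I am complaining about.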
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

They replaced the Frontier Model Division with the Government Operations Agency. I am assuming it is this one. So instead of a new agency, we use an existing one. I suppose that could be a good thing? IDK. This was one of the things Anthropic pushed back on. I am wondering if this was just theater. (Because why?) That way they can say they pushed back and now they accept it?

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Further, here are the rules for computing clusters (a sketch of the required customer record follows the quoted text):

22604.​

(a) A person that operates a computing cluster shall implement written policies and procedures to do all of the following when a customer utilizes compute resources that would be sufficient to train a covered model:
(1) Obtain the prospective customer’s basic identifying information and business purpose for utilizing the computing cluster, including all of the following:
(A) The identity of the prospective customer.
(B) The means and source of payment, including any associated financial institution, credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier.
(C) The email address and telephonic contact information used to verify the prospective customer’s identity.
(2) Assess whether the prospective customer intends to utilize the computing cluster to train a covered model.
(3) If a customer repeatedly utilizes computer resources that would be sufficient to train a covered model, validate the information initially collected pursuant to paragraph (1) and conduct the assessment required pursuant to paragraph (2) prior to each utilization.
(4) Retain a customer’s Internet Protocol addresses used for access or administration and the date and time of each access or administrative action.
(5) Maintain for seven years and provide to the Attorney General, upon request, appropriate records of actions taken under this section, including policies and procedures put into effect.
(6) Implement the capability to promptly enact a full shutdown of any resources being used to train or operate models under the customer’s control.
(b) A person that operates a computing cluster shall consider industry best practices and applicable guidance from the U.S. Artificial Intelligence Safety Institute, National Institute of Standards and Technology, and other reputable standard-setting organizations.
(c) In complying with the requirements of this section, a person that operates a computing cluster may impose reasonable requirements on customers to prevent the collection or retention of personal information that the person that operates a computing cluster would not otherwise collect or retain, including a requirement that a corporate customer submit corporate contact information rather than information that would identify a specific individual.
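For a sense of what paragraph (1) would mean in practice for a small operator, here is a minimal sketch (my own illustration; the class and field names are mine, not the bill's) of the kind of customer record you would have to collect, keep current, and retain:

Python:
# Illustrative only: fields mirror 22604(a)(1)(A)-(C) and (a)(4); names are my own.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProspectiveCustomerRecord:
    identity: str                  # (A) identity of the prospective customer
    payment_means_and_source: str  # (B) financial institution, card/account number, wallet address, etc.
    email: str                     # (C) email used to verify identity
    phone: str                     # (C) telephonic contact information
    business_purpose: str          # stated purpose for utilizing the cluster
    access_log: list = field(default_factory=list)  # (a)(4) IP + timestamp of each access/admin action

    def log_access(self, ip_address: str) -> None:
        # Records must be retained for seven years and produced to the AG on request, per (a)(5).
        self.access_log.append((ip_address, datetime.now(timezone.utc).isoformat()))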

Here are the punishments (a rough penalty calculation follows the quoted text):

22606.​

(a) The Attorney General may bring a civil action for a violation of this chapter and to recover all of the following:
(1) For a violation that causes death or bodily harm to another human, harm to property, theft or misappropriation of property, or that constitutes an imminent risk or threat to public safety that occurs on or after January 1, 2026, a civil penalty in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.
(2) For a violation of Section 22607 that would constitute a violation of the Labor Code, a civil penalty specified in subdivision (f) of Section 1102.5 of the Labor Code.
(3) For a person that operates a computing cluster for a violation of Section 22604, for an auditor for a violation of paragraph (6) of subdivision (e) of Section 22603, or for an auditor who intentionally or with reckless disregard violates a provision of subdivision (e) of Section 22603 other than paragraph (6) or regulations issued by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, a civil penalty in an amount not exceeding fifty thousand dollars ($50,000) for a first violation of Section 22604, not exceeding one hundred thousand dollars ($100,000) for any subsequent violation, and not exceeding ten million dollars ($10,000,000) in the aggregate for related violations.
(4) Injunctive or declaratory relief.
(5) (A) Monetary damages.
(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.
(6) Attorney’s fees and costs.
(7) Any other relief that the court deems appropriate.
(b) In determining whether the developer exercised reasonable care as required in Section 22603, all of the following considerations are relevant but not conclusive:
(1) The quality of a developer’s safety and security protocol.
(2) The extent to which the developer faithfully implemented and followed its safety and security protocol.
(3) Whether, in quality and implementation, the developer’s safety and security protocol was inferior, comparable, or superior to those of developers of comparably powerful models.
(4) The quality and rigor of the developer’s investigation, documentation, evaluation, and management of risks of critical harm posed by its model.
(c) (1) A provision within a contract or agreement that seeks to waive, preclude, or burden the enforcement of a liability arising from a violation of this chapter, or to shift that liability to any person or entity in exchange for their use or access of, or right to use or access, a developer’s products or services, including by means of a contract of adhesion, is void as a matter of public policy.
(2) A court shall disregard corporate formalities and impose joint and several liability on affiliated entities for purposes of effectuating the intent of this section to the maximum extent allowed by law if the court concludes that both of the following are true:
(A) The affiliated entities, in the development of the corporate structure among the affiliated entities, took steps to purposely and unreasonably limit or avoid liability.
(B) As the result of the steps described in subparagraph (A), the corporate structure of the developer or affiliated entities would frustrate recovery of penalties, damages, or injunctive relief under this section.
(d) Penalties collected pursuant to this section by the Attorney General shall be deposited into the Public Rights Law Enforcement Special Fund established pursuant to Section 12530 of the Government Code.
(e) This section does not limit the application of other laws.
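To put subdivision (a)(1) in dollar terms, here is a minimal sketch (my own; the $100M training-cost figure is a made-up example, not from the bill or any real case):

Python:
# 22606(a)(1): penalty capped at 10% of the cost of the compute used to train the
# covered model (at average market cloud prices at the time of training) for a first
# violation, and 30% for subsequent violations. Illustrative only.

def max_penalty(training_compute_cost_usd: float, first_violation: bool = True) -> float:
    rate = 0.10 if first_violation else 0.30
    return rate * training_compute_cost_usd

# Hypothetical model whose training compute cost $100M at market prices:
print(max_penalty(100_000_000))         # up to $10,000,000 for a first violation
print(max_penalty(100_000_000, False))  # up to $30,000,000 for a subsequent violation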

But here is the worst part (the definition of a computing cluster):
(d) “Computing cluster” means a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence.

Note: a compute limit, not a monetary limit, again! People claiming in their commentary that it only affects big, moneyed companies are clearly lying. A seed-stage startup could easily be affected, and compliance lawyers could raise its costs by 50%.

Right now, 10^20 FLOPS would be roughly $1M (that works out to about $0.01 per 10^12 FLOPS). Next year, we'd expect $0.5M, in 2026 $250K, etc. One could expect, by the end of the decade, to be able to purchase the equivalent on a loan with terms similar to a car loan.
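That projection as a quick sketch (my own; it just takes the ~$1M figure above as the 2024 starting point and assumes price-per-FLOPS halves every year, a Moore's-Law-style assumption rather than a guarantee):

Python:
# Illustrative only: project the hardware cost of a threshold-sized (1e20 FLOPS) cluster,
# starting from the rough $1M 2024 estimate above and halving the price every year.
start_year, start_cost = 2024, 1_000_000
for year in range(start_year, 2031):
    cost = start_cost / (2 ** (year - start_year))
    print(f"{year}: ~${cost:,.0f}")
# By 2030 this lands around $15K, i.e., car-loan territory, while the bill's
# 1e20 FLOPS trigger stays fixed.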

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
There are some (pseudo)positive steps:
The word "materially" shows up in a lot places. Hopefully, those are steps away from thought crime. I assume "materially" means doing something more than thinking about things with the aid of a computer. However, the "materially" is often counteracted by "may" or "enable", bringing us back to thought crime territory.

They expanded the Board of Frontier Models to have more people and to avoid conflicts of interest, which I think is a good thing. But the open-source community's representation is quite wanting. There is also, in Sec. 4, 11547.6 (e)(1)(B), the phrase "and government entities, including from the open-source community." If this isn't signalling military-industrial complex capture of open source, it is a very unfortunate mistake.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finally, there was something mainly positive in the amendments to the bill.
(i) “Developer” means a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing [struck: power,] [added: power and cost,] or by fine-tuning an existing covered model [struck: using sufficient] [added: or covered model derivative using a] quantity of computing power [struck: pursuant to] [added: and cost greater than the amount specified in] subdivision (e).

It could be a little clearer still, but I'll take it.

If only the other parts that talk about limits were similarly clear about costs (ideally even clearer).
 

ygolo

"It makes a fundamental mistake of trying to regulate AI technology rather than address harmful applications. Worse, by making it harder for developers to release open AI models, it will hamper researchers’ ability to study cutting-edge AI and spot problems, and therefore will make AI less safe." - Andrew Ng

Less safe.

That's the effect of the bill. It stems from a fundamental misconstrual of systems and models.

Most of the rules, even the more stringent ones, would make sense for planned applications (before deployment), even before any incident happens.

We don't have regulations against people making drawings or computer simulations of bridges before there are even cogent plans for the bridge.
 

ygolo


I'm incredibly conflicted. When the likes of Andrew Ng, Yann LeCun, and Fei-Fei Li can't convince lawmakers to rethink their approach to AI safety, I can only imagine that the forces wanting military-industrial control of AI are winning. Open source AI will be dealt a huge blow. Llama 4 (or whatever other open source models) will have to come in just under the training FLOPS limit so that independent AI researchers can do uncaptured AI safety research.

If AGI and ASI do actually emerge soon, we can point to this legislation as the cornerstone of subjugation of the human race.
 

ygolo

When I post about breaking up big tech or supporting little tech, it is because the tech sector is what I know.

But I know, from conversations, that the same thing is happening in many sectors of the economy: Oligopolies are creating hype and fear so that laws and regulations are passed that directly affect potential start-ups.

The small players, just trying to survive or grow, get locked out, and their pleas fall on deaf ears.

Making sensible regulations to protect people is vital. But people playing "tilt-the-playing-field" in the middle of that process force everyone else to play it too. See Bullshit Jobs (goons).
 

ygolo

I want to urge people to write to your lawmakers and tell Governor Newsom to veto SB 1047.

Actually read the text of the bill.

Above, I've already shown, quite directly (with numbers you could put in a business plan), how it'll put hurdles up for small open source developers. I quoted the text of the bill that puts up the hurdles. Most of it is still there as it passed the Assembly.

This bill intentionally pushes frontier models towards closed, military-industrial-controlled AI. It will make AI less safe, creating a path dependence in AI safety based not on empiricism but on guesswork.

The detractors of this bill include people like Andrew Ng, perhaps the person who has educated more people about AI than anyone else in the world. He's behind courses like AI for Everyone.

Two other big detractors, Yann LeCun and Fei-Fei Li, are also "godparents of AI." Fei-Fei Li runs the Institute for Human-Centered AI.

The characterization by Vox writers that the detractors are billionaires opposing "sensible legislation" brings their whole credibility into question.

Here is some information on the financial backing of Vox:

SB 1047 is clearly anti-little tech. It's been nerfed a lot, but it was wrong-headed from the start. Don't regulate equations (models); regulate applications (systems).

Placing undue regulatory barriers on simulating bridges would make bridges less safe, not more. The same is true for AI.
 

ygolo

Here is a breakdown of the costs that SB 1047 adds. Please read it and give honest feedback about whether you think this is oppressive for people.

I'm trying to be reasonably frugal in terms of methods.

Here’s a comprehensive breakdown of the total costs for conducting AI Safety research on a new open-source model under the assumption of a single individual operator using NVIDIA H100 GPUs on Vultr, while complying with the new AI safety bill (SB 1047).

1. Technical and Operational Costs​

  • Compute Resources:
  • Hourly Cost: $32.20 per hour for 14 GPUs.
  • Daily Cost: $772.80 (assuming 24 hours of operation per day).
  • Monthly Cost: $23,184 (assuming 30 days of continuous operation).
  • Storage Costs:
  • 1 TB SSD Storage: $100 per month.
  • Data Transfer Costs:
  • Estimated Cost: $50 per month.
  • Additional Cloud Infrastructure Costs:
  • Extra Cloud Services: $200 per month.
  • Operational Tools and Automation:
  • Automation Tools: $0 to $50 per month.
  • Total Monthly Technical and Operational Cost: $23,184 + $100 + $50 + $200 + $50 = $23,584 per month

2. Compliance Costs​

Given the requirements set by SB 1047, the compliance costs are broken down as follows:
  • Safety and Security Protocol Implementation:
  • Initial Cost: $50,000 to $150,000.
  • Annual Cost: $10,000 to $30,000.
  • Third-Party Audits:
  • Annual Cost: $50,000 to $100,000.
  • Legal and Administrative Compliance:
  • Annual Cost: $25,000 to $75,000.
  • Personnel Costs:
  • Initial and Annual Cost: $100,000 to $200,000.
  • Shutdown Capabilities:
  • Initial Cost: $20,000 to $50,000.
  • Initial and Annual Cost of operation: $5,000 to $10,000.
  • Whistleblower Protections and Internal Processes:
  • Annual Cost: $10,000 to $30,000.
  • Total Compliance Costs:
  • Initial Costs: $255,000 to $605,000.
  • Ongoing Annual Costs: $195,000 to $445,000.

Combined Cost Summary​

  • Initial Costs (Compliance and Setup):
  • Total Initial Cost: $255,000 to $605,000.
  • Ongoing Monthly Costs:
  • Technical and Operational Costs: $23,584 per month.
  • Ongoing Annual Compliance Costs: $195,000 to $445,000 annually (approximately $16,250 to $37,083 per month).

Total Estimated Monthly Cost Including Compliance:​

  • Total Monthly Cost (including a pro-rated monthly compliance cost): $23,584 + ($16,250 to $37,083) = $39,834 to $60,667 per month

This combined cost estimate includes all operational, technical, and compliance-related expenses needed to conduct AI Safety research under the new regulations.
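If you want to sanity-check those totals, here is the arithmetic as a small script (it only re-adds the figures above; nothing new is assumed):

Python:
# Re-derive the monthly totals from the line items above.
compute, storage, transfer, extra_cloud, automation = 23_184, 100, 50, 200, 50
technical_monthly = compute + storage + transfer + extra_cloud + automation   # 23,584

annual_compliance_low, annual_compliance_high = 195_000, 445_000
compliance_monthly_low = annual_compliance_low / 12    # ~16,250
compliance_monthly_high = annual_compliance_high / 12  # ~37,083

print(f"technical: ${technical_monthly:,}/month")
print(f"total: ${technical_monthly + compliance_monthly_low:,.0f} to "
      f"${technical_monthly + compliance_monthly_high:,.0f} per month")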

That seems to double to octuple your costs as an open-source developer starting a new project.
 

ygolo

SB 1047 will put open-source developers, even academics, into the grey zone with these bad actors:

The broad-brush media still doesn't distinguish equations (models) from systems (applications).

You could be cracking down on malicious actors with application-focused regulations, but SB 1047 is pretty much guaranteed to grab some academic research groups instead.

I quoted the text and walked through the problematic language earlier.
 

ygolo

We know that cloud initially opened opportunities for people, and then closed them.

It used to be that people who wanted to learn could simply experiment and pick up the needed skills. But over time, all the opportunities to do that at reasonable cost (often free) got bought out by incumbents.

This produced a dependence (desperation really) to get into a "FAANG+" company to learn cloud skills in the first place. Learning the alternatives (Digital Ocean, Vultr, Linode) pretty much made you a pariah.

This same mechanic is now in full force in the AI race. The real AI race is between ubiquity and capabilities.

If ubiquity loses, so do workers, and so does humanity. CA SB 1047 is the anti-opportunity-economy bill. Its authors consulted only the incumbent safety labs (all EA-funded, with only some waking up to the true danger that EA as a philosophy creates).

The knowledgeable technical people who are encouraging Newsom to sign the bill are likely all "Effective Altruists" (coming from the philosophical lineage of SBF). The general public supporting it, technical or not, may not have read and thought about the bill closely. If I hadn't, I would have naively supported the bill too.

It's important to regulate applications (systems), not equations (models). Making it more burdensome to simulate bridges effectively makes bridge building less safe. That's why many practitioners not beholden to EA, like Andrew Ng and Fei-Fei Li, as well as most independent AI practitioners, oppose this bill. Read the bill, not the commentary. It effectively makes AI less safe. Every piece of legislation built on this foundation has to be strongly opposed to keep it from becoming a thought-crime bill. There are plenty of other sensible AI safety bills that should not be poisoned with this EA nonsense dressed up as rationalized "common sense."

"Effective Altruism" is philosophy of rationalized subjugation(I was drawn in initially myself). In fact, utilitarian philosophy, more generally, is the dressing up of future scenarios with "credences" that reflect nothing more than a person's biases.

There's a place for qualitative reasoning, especially in cases where there's extreme uncertainty. You can be quantitative (utility functions, etc.), but not take it so seriously, because there'll be wild swings in credences and scenarios as time goes on.

I feel obligated to play the sci-fi game of imagining the end of humanity through AI, to counter the Pascal's Mugging that EAs are pulling off right now. The end of humanity is clearly much more likely if only a select few people understand AI before AGI and ASI happen. The AIs will have a much easier time socially engineering a smaller group of AI researchers. When people talk about "the first encounter with AI" (social media, though it's not even close to the first encounter) causing ills, realize it came in the closed, proprietary, opaque form.
 

ygolo

What possible reason do SAG-AFTRA and NOW have to support CA SB 1047?

The following bills seem far less problematic:


In fact, they seem exactly counter to SB 1047. SB 947 in particular is one I would think SAG-AFTRA would want, and would not want contradicted by SB 1047.

Ultimately, I think it's about coalition building. Who cares if it disenfranchises people from their livelihoods, right? Tech people aren't people, right?
 

ygolo

The median (not average), typical startup founder is closer to a homeless person than to a billionaire.

The fact that celebrities don't get that may be the root of the issue.

You would think they would have sympathy, since this is similar to stand-ups, musicians, etc. paying their dues before making it (and most don't make it), much like what they themselves had to do before fame.

I've been complaining about this vein of Authoritarian left forming for some time now.

Don't worry, I am still voting Democrat nationally (president/congress).

CA SB 1047 has been nerfed enough that I may just have to risk getting sued by the CA AG in order to keep feeding my family. It's not like I have a choice.

But just like people (including me) were sounding alarm bells about the Fascist streaks in the Republican party in 2015, and people sounding the alarm bells about housing costs during the pandemic, people are sounding the alarm bells about the rapid rise of the Authoritarian Left (including me).

Fascism is clearly the greater threat.

But to say that Maoism and Stalinism weren't real, and that something similar couldn't happen here, is the same complacency Republicans had. We need to beat fascism this election, but then immediately turn our attention to beating back the Authoritarian Left's influence. It snuck in fast because we're so focused on beating back fascism.

What happened in Cuba, Venezuela, East Germany, the Soviet Union, and North Korea can happen here too.

Water is wet. So is milk.

Harris will hopefully get a huge bump after the debate.

But for all the pundits wondering why we keep coming back to a statistical dead heat: fear of the Authoritarian Left is why.

Unfortunately, celebrities have a long track record of endorsing left-wing despots; that may be Trump's draw for them too.

Remember Hugo Chavez, and the number of celebrities enamored with him?
 

ygolo

A ridiculous, in-your-face example of sh*tting on tech workers while using the fruits of their labor:

People are now perfectly happy using no-code, low-code, or AI generated code while simultaneously protesting the use of AI generated art or voice.

Tech people are people, right? Tech workers aren't workers, right? You're perfectly okay with no-code, low-code, and AI-generated code taking work from coders?
 

ygolo

We talked about NIMBY (Not In My Backyard) leading to a BANANA(Build Almost Nothing Anywhere Near Anyone) situation.

This led to the accute housing crisis in California in particular. Why I bring this up is because of the well intentioned(on the surface) CEQA law.


Anyone can sue anyone about anything, but CEQA allowed recruiting the California government for this purpose.

Now a similarly well-intentioned (on the surface) law, SB 1047, will create a CEQA for software.

The BANANA situation that people will weaponize SB 1047 to create for software will be "Build Almost Nothing Anywhere Near Automation."

IOW, people will weaponize the CA AG using SB 1047 to put hurdles in front of anyone building any automation in California.

IOW, any software, robotics, or manufacturing improvements.

If Kamala Harris is serious about the "Opportunity Economy," she can press Newsom to veto it, and recruit Nancy Pelosi's help; Pelosi has already come out against it.

Otherwise, they'll hand a largely winning talking point to Trump-Vance.
 

ygolo

A ridiculous, in-your-face example of sh*tting on tech workers while using the fruits of their labor:

People are now perfectly happy using no-code, low-code, or AI generated code while simultaneously protesting the use of AI generated art or voice.

Tech people are people, right? Tech workers aren't workers, right? You're perfectly okay with no-code, low-code, and AI-generated code taking work from coders?

I'm not telling people to avoid AI use in any area - quite the opposite.

I was pointing out the deep contradiction (especially among gaming communities and media companies) of using no-code, low-code, and generated code, which you would have needed software engineers for in the past, while disparaging the use of AI-generated writing, art, music, and voice.

Different people have different skills and differing resources.

If a poor person somewhere wanted to use AI for everything to create a game that gives them the ability to house and feed their family, why disparage them?

How many community theaters, dance studios, etc. make use of copyrighted works in their businesses? How many do you think get rights or permissions?
 