
Discussion (149 Comments)

paxysabout 22 hours ago
This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always on, always listening AI device that inserts itself into your and your family’s private lives.

“Oh but they only run on local hardware…”

Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.

Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

Have all your guests consented to this?

What happens when someone breaks in and steals the box?

What if the government wants to take a look at the data in there and serves a warrant?

What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?

zmmmmmabout 21 hours ago
The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad as the potential for what technology can do to assist us grows that this actually may be the limit on how much we can fully take advantage of it.

I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.

ajuhaszabout 21 hours ago
Agreed. While we've tried to think through this and build in protections, we can't pretend that there is a magical perfect solution. We do have strong conviction that doing this inside the walls of your home is much safer than doing it within any company's datacenter (I accept that some just don't want this to exist, period, and we won't be able to appease them).

Some of our decisions in this direction:

  - Minimize how long we have "raw data" in memory
  - Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
  - Encrypt storage with hardware protected keys (we're building on top of the Nvidia Jetson SOM)
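As a rough illustration of that last point, here is a minimal sketch of encrypting a memory record at rest with a key the application itself never persists. Illustrative only, not our production code: the hardware key handling on the Jetson is stubbed out, and AES-GCM via Python's cryptography package is just one possible choice.

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  def load_hardware_key() -> bytes:
      # Stand-in: a real device would fetch/unwrap this from secure hardware,
      # never generate or store it in application code.
      return AESGCM.generate_key(bit_length=256)

  def encrypt_memory(key: bytes, plaintext: bytes) -> bytes:
      nonce = os.urandom(12)  # unique per record
      return nonce + AESGCM(key).encrypt(nonce, plaintext, b"memory-record")

  def decrypt_memory(key: bytes, blob: bytes) -> bytes:
      nonce, ciphertext = blob[:12], blob[12:]
      return AESGCM(key).decrypt(nonce, ciphertext, b"memory-record")

  key = load_hardware_key()
  blob = encrypt_memory(key, b"family is out of milk and eggs")
  assert decrypt_memory(key, blob) == b"family is out of milk and eggs"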
We're always open to criticism on how to improve our implementation around this.
bossyTeacherabout 6 hours ago
> - Minimize how long we have "raw data" in memory

I believe you should allow people to set how long the raw data should be stored as well as dead man switches.

HWR_14about 17 hours ago
> Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it,

In the US it is not legal to compel you to turn over a password; it's a violation of your Fifth Amendment rights. In the UK you can be jailed until you turn over the password.

eelabout 5 hours ago
At Amazon, their travel trainings always recommended giving out your laptop password if asked by law enforcement or immigration, regardless of whether it was legal in the jurisdiction. Then you were to report the incident as soon as possible afterwards, and you'd have to change your password and possibly get your laptop replaced.

That kind of policy makes sense for the employee's safety, but it definitely had me thinking how they might approach other tradeoffs. What if the Department of Justice wants you to hand over some customer data that you can legally refuse, but you are simultaneously negotiating a multi-billion dollar cloud hosting deal with the same Department of Justice? What tradeoff does the company make? Totally hypothetical situation, of course.

rrr_oh_manabout 7 hours ago
There’s an interesting loophole for Face ID…
SpicyLemonZestabout 3 hours ago
There are many jurisdictions in the US (not all!) where you can't be compelled to turn over a password in a criminal case that's specifically against yourself. But that's a narrow exception to the general principle that a court can order you to give them whatever information they'd like.
Sharlinabout 3 hours ago
> nothing you can do can prevent the information being handed over

I'm being a bit flippant here, but thermite typically works fine.

drdaemanabout 16 hours ago
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

Is this somehow fundamentally different from having memories?

Because I thought about it, and decided that personally I do - with one important condition, though. I do because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?

But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate myself to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.

If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.

dbtcabout 15 hours ago
This will not augment memory the way glasses do for sight, this will replace memory the way a wheelchair replaces legs.
estimator7292about 3 hours ago
So do you think disabled people deserve to participate in society or not?
shevy-javaabout 14 hours ago
Memories are usually private. People can make them public via a blog.

AI feels more like an organized sniffing tool here.

> If a product passes those criteria, then it - by design - cannot violate others' privacy

A product can most assuredly violate privacy. Just look at how Facebook gathered offline data to connect people to real-life data points, without their consent - and without them knowing. That's why I call it Spybook.

Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.

drdaemanabout 14 hours ago
> AI feels more like an organized sniffing tool here.

"AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.

But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.

> A product can most assuredly violate privacy.

That depends on the design and implementation.

encomabout 15 hours ago
>That's why we have technology in the first place, to improve our lives, right?

No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state approved ones.

Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.

If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.

schrodingerabout 2 hours ago
I don't think that ads _have_ to be evil.

When I look at Google, I see a company that is fully funded by ads, but provides me a number of highly useful services that haven't really degraded over 20 years. Yes, the number of search results that are ads grew over the years, but by and large, Google search and Gmail are tools that serve rather benevolently. And if you're about to disagree with this ask yourself if you're using Gmail, and why?

Then I look at Meta or X, and I see a cesspool of content that's driven families apart and created massive societal divides.

It makes me think that Ads aren't the root of the problem, though maybe a "necessary but not sufficient" component.

lukanabout 8 hours ago
I really hope that before I get old and fragile, I will get my smart robotic house, with a (local!) AI assistant always listening to my wishes and then executing them.

I'd rather have that than the horror of being old and forgotten in half-hearted care like most old people are right now. AI and robots can bring empowerment. And it is up to us whether we let ad companies serve them to us from the cloud, or run local models in the basement.

drdaemanabout 14 hours ago
> you need to pause and rethink your life.

If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable to get to the market in the world we live in), I urge you to exercise your imagination. Surely, it's possible to think of an optimistic scenario?

Even more so, if you think technology is here to unconditionally screw us up no matter what. Honestly - when the world is so gloomy, seek something nice, even if a fantasy.

BoxFourabout 21 hours ago
It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this period).

The non privacy-conscious will just use Google/etc.

yndoendoabout 18 hours ago
A good example of this is what one of my family members' partners said: "Isn't it creepy that you just talked about something and now you are seeing ads for it? Guess we just have to accept it."

My response was no, I don't get any of that, because I disable that technology, since it is always listening and can never be trusted. There is no privacy in those services.

They did not like that response.

dotancohenabout 16 hours ago
I used to be considered a weirdo and a creep because I would answer the question of why I don't have WhatsApp with "I do not accept their terms of service". Now people accept this answer.

I don't know what changed, but the general public is starting to figure out that they actually can disagree with large tech companies.

bandramiabout 11 hours ago
I want a hardware switch for the microphone. If it can hear the wake word it's already listening.
com2kidabout 20 hours ago
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

Typically not how these things work. Speech is processed using ASR (automatic speech recognition), and then run through a prompt that checks for appropriate tool calls.
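A toy sketch of that pipeline shape, just to make the flow concrete (nothing here is any real product's code; the transcribe() stub stands in for a streaming ASR model and a keyword match stands in for the LLM prompt that decides on tool calls):

  from typing import Optional

  def transcribe(audio_chunk: bytes) -> str:
      # Stand-in for a real streaming speech-to-text model.
      return audio_chunk.decode("utf-8", errors="ignore")

  TOOLS = {
      "shopping_list.add": ["we're out of", "we need to buy"],
      "reminder.create": ["don't forget", "remind me"],
  }

  def check_for_tool_call(utterance: str) -> Optional[tuple[str, str]]:
      lowered = utterance.lower()
      for tool, triggers in TOOLS.items():
          if any(t in lowered for t in triggers):
              return tool, utterance
      return None  # no matching tool: the utterance is simply dropped

  for chunk in [b"oh no, we're out of milk", b"nice weather today"]:
      text = transcribe(chunk)
      print(check_for_tool_call(text) or f"ignored: {text!r}")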

I've been meaning to basically make this myself but I've been too lazy lately to bother.

I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.

Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.

I understand that for people who aren't neurodiverse, the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps people remember important things can be dramatically life changing.

ramenbytesabout 19 hours ago
> Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

> Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.

> I understand that for people who aren't neurodiverse, the idea of just forgetting to do something that is incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps people remember important things can be dramatically life changing.

Those don't sound like things that you need AI for.

jcgrilloabout 17 hours ago
> > Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

This would be its death sentence. Nuked from orbit:

  sudo rm -rfv /
Or maybe if there's any slower, more painful way to kill an AI then I'll do that instead. I can only promise the most horrible demise I can possibly conjure is that clanker's certain end.
dotancohenabout 16 hours ago

  > Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
I push a button on the phone and then say them. I've been doing this for over twenty years. The problem is ever getting back to those voice notes.
reilly3000about 19 hours ago
It really is a prosthetic for minds that struggle to organize themselves.
allovertheworldabout 3 hours ago
Like a calendar
tzsabout 16 hours ago
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

Maybe I missed it, but I didn't see anything there that said it saved conversations. It sounds like it processes them as they happen and then takes actions that it thinks will help you achieve whatever goals of yours it can infer from the conversation.

SkyPuncherabout 19 hours ago
I agree. I also don't really have an ambient assistant problem. My phone is always nearby and Siri picks up wake words well (or I just hold the power button).

My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.

ajuhaszabout 19 hours ago
Some of the more magical moments we’ve had with Juno are automatic shopping list creation (saying “oh no, we are out of milk and eggs” out loud becomes a shopping list, without having to remember to tell Siri) and event tracking around the kids (“Don’t forget next Thursday is early pickup”). A nice freebie is moving the wake word to the end: “What’s the weather today, Juno?” feels much more natural than a prefixed wake word.
ajuhaszabout 21 hours ago
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80ms of actual audio is in memory and about 5 minutes of transcribed audio (text) is in memory (this is to help the STT model know the context of the audio for higher transcription accuracy).

Of these 5-minute transcripts, those that don't become memories are forgotten. So only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
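To make the rolling-window idea concrete, here's a minimal sketch of its shape (illustrative only, not our production code; the extraction step is stubbed as a trivial predicate where the real system runs a tuned extraction model):

  import time
  from collections import deque

  WINDOW_SECONDS = 5 * 60  # keep roughly five minutes of transcript context

  class RollingTranscript:
      def __init__(self):
          self.segments = deque()  # (timestamp, text)

      def add(self, text: str, now: float | None = None) -> None:
          now = time.time() if now is None else now
          self.segments.append((now, text))
          while self.segments and now - self.segments[0][0] > WINDOW_SECONDS:
              self.segments.popleft()  # aged out: never written anywhere

      def extract_memories(self) -> list[str]:
          # Stand-in for a deliberately conservative extraction model:
          # only obviously "rememberable" lines survive.
          return [t for _, t in self.segments if "don't forget" in t.lower()]

  buf = RollingTranscript()
  buf.add("the pasta needs five more minutes")
  buf.add("don't forget Thursday is early pickup")
  print(buf.extract_memories())  # only the second line would be stored durably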

throwaway5465about 19 hours ago
They seem quite honest with who they are and how they do what they do.
peytonabout 7 hours ago
I’m 99% sure this article is AI generated. Regardless, people will gravitate to the tool that solves their problems. If their problem is finding a local plumber or a restaurant they like, advertising will be involved.
toleranceabout 12 hours ago
The product that’s being implicitly advertised here is supposed to ship at the end of this year and there doesn’t even appear to be a real photo of the thing and if that’s an indicator of the quality of the product then I must assume that it is poor and the people responsible also apparently do not have the money to hire a capable web designer and I’m sorry if this is harsh or unnecessary but I never thought I would miss the generic Bootstrap or Tailwind or whatever bougie framework other companies use because boy the layout here does not elicit great expectations for their product either and I’m worried that if it ever does ship that nefarious parties will intercept all the private communications of its unfortunate owners and in an ironic sort of way their devices will become the first sort of reverse ad agent that does not transmit advertisements but receives advertisements in the form of the raw interests of their clients fed to said nefarious parties and then laundered through more traditional channels.

A man-in-the-middle-of-the-middle-man.

ajuhaszabout 3 hours ago
The first version will use small-batch production techniques like 3D printing and small-volume PCB manufacturing. On the photos, we thought it more appropriate to show a sketch vs. a pretty AI-generated photo that isn't true to anything yet but presents well.

We have some details here on how we’re doing the prototyping with some photos of the current prototype: https://juno-labs.com/blogs/how-we-validate-our-custom-ai-ha...

toleranceabout 3 hours ago
Well. Color me convinced a bit. I took a little time to compare where you're at now to where Ring began with Doorbot. It's not improbable that this can take off.

I’m not a product guy. Or a tech guy for that matter. Do you have any preparations in mind for Apple’s progress with AI (viz. their partnership with Google)? I don’t even know if the actual implementation would satisfy your vision with regard to everything staying local though.

Starting with an iPad for prototyping made me wonder why this didn’t begin as just an app. Or why not just ship the speaker + the app as a product.

You don’t have sketches? Like ballpoint pen on dot grid paper? This is me trying to nudge you away from the impression I get that the website is largely AI-scented.

After making my initial remarks (a purposely absurd one that I was actually surprised got upvoted at all), I checked your resume and felt a disconnect between your qualifications and the legitimate doubt I described in my comment.

To be honest my impression was mostly led by the contents of the website itself, speculation about the quality/reliability of the actual product followed.

I don’t want to criticize you and your decisions in that direction but if this ambition is legitimate it deserves better presentation.

Do you have any human beings involved in communicating your vision?

JumpCrisscrossabout 10 hours ago
> is supposed to ship at the end of this year and there doesn’t even appear to be a real photo

Given they're "still finalizing the design and materials" and are not based in China, I think it's a safe bet that the first run will either be delayed or be an alpha.

tempodoxabout 9 hours ago
In addition to being vaporware, it’s presumably vibecoded slop, so: vaporslop.
thundergolferabout 20 hours ago
I agree with the core premise that the big AI companies are fundamentally driven towards advertising revenue and other antagonistic but profit-generating functionality.

Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.

I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's The Truth of Fact short story[2].

My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.

Just as in Chiang's story spaces were 'invaded' by writing, AI will fill the world and those opting out will occupy the same marginal positions as those occupied by dumb phone users and people without home cameras or televisions.

Interesting times ahead.

1. https://en.wikipedia.org/wiki/The_Entire_History_of_You 2. https://en.wikipedia.org/wiki/The_Truth_of_Fact,_the_Truth_o...

0xbadcafebeeabout 14 hours ago
> The always-on future is inevitable

Not if you use open source. Not if you pay for services that are contractually bound not to mine your data. Not if you support start-ups that commit to privacy and the banning of ads.

I said on another thread recently that we need to kill Android, that we need a new Mobile Linux that gives us total control over what our devices do, our software does. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps, yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.

Vote with your dollars.

bigyabaiabout 2 hours ago
We have mobile Linux. It's only supported on less than a dozen handsets and runs like shit, but we have it already.

The reason nobody uses mobile Linux is that it has to compete with AOSP-derived OSes like LineageOS and GrapheneOS, which don't suck or run like shit. This is what it looks like when people vote with their dollars, people want the status-quo we have (despite the horrible economic damages).

13pixelsabout 5 hours ago
The explicit ads angle is only half the story. Even without paid placements, these models already have implicit recommendations baked in.

We ran queries across ChatGPT, Claude, and Perplexity asking for product recommendations in ~30 B2B categories. The overlap between what each model recommends is surprisingly low -- around 40% agreement on the top 5 picks for any given category. And the correlation with Google search rankings? About 0.08.
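For a sense of what that measurement looks like, here's a back-of-the-envelope version with invented example data (the actual queries, categories, and numbers from our run aren't reproduced here):

  # Two models' hypothetical top-5 picks for one B2B category (made-up data).
  model_a = ["HubSpot", "Salesforce", "Pipedrive", "Zoho", "Close"]
  model_b = ["Salesforce", "HubSpot", "Freshsales", "Copper", "Monday"]

  def top5_agreement(a: list[str], b: list[str]) -> float:
      """Share of picks the two top-5 lists have in common."""
      return len(set(a) & set(b)) / 5

  print(f"top-5 agreement: {top5_agreement(model_a, model_b):.0%}")  # -> 40%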

So we already have a world where which CRM or analytics tool gets recommended depends on which model someone happens to ask, and nobody -- not the models, not the brands, not the users -- has any transparency into why. That's arguably more dangerous than explicit ads, because at least with ads you know you're being sold to.

ACCount37about 4 hours ago
What you're saying is "different LLMs recommend different things".

Replace "LLMs" with "random schmucks online" and what changes exactly?

jayd16about 3 hours ago
No one is arguing to replace everything with random schmucks.
ACCount37about 2 hours ago
Why would one argue for the status quo?
5o1ecistabout 10 hours ago
> They’re building a pocket-sized, screenless device with built-in cameras and microphones — “contextually aware,” designed to replace your phone.

"Contextually aware" means "complete surveillance".

Too many people speak of ads, not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.

Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's useful and/or makes them feel safe and happy.

JumpCrisscrossabout 9 hours ago
> not enough people speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner

Everyone online is constantly talking about it. The truth is for most people it's fine.

Some folks are upset by it. But we by and large tend to just solve the problem at the smallest possible scale and then mollify ourselves with whining. (I don't have social media. I don't have cameras in or around my home. I've worked on privacy legislation, but honestly nobody called their representatives and so nothing much happened. I no longer really bring up privacy issues when I speak to my electeds because I haven't seen evidence that the nihilism has passed.)

5o1ecistabout 9 hours ago
There are many things wrong with your post and I'm not convinced that there is a point in attempting to explain it to you, MY FELLOW HUMAN.

I'll let you decide.

Thank you.

JumpCrisscrossabout 8 hours ago
> I'm not convinced that there is a point in attempting to explain it

That encapsulates my point.

I’ve worked on various pieces of legislation. All privately. A few made it into state and federal law. Broadly speaking, the ones that make it are the ones whose supporters you can't get to stop calling in.

Privacy issues are notoriously shit at getting people to call their electeds on. The exception is when you can find traction outside tech, or if the target is directly a tech company.

alansaberabout 5 hours ago
Already here. Even without flexible but dodgy LLM automation, entities like marketing companies have had access to extreme amounts of user data for a long time.
dasil003about 14 hours ago
Maybe I'm just getting old, but I don't understand the appeal of the always-on AI assistant at all. Even leaving privacy/security issues aside, and even if it gets super smart and capable, it feels like it would have a distancing effect from my own life, and undermine my own agency in shaping it.

I'm not against AI in general, and some assistant-like functionality that functions on demand to search my digital footprint and handle necessary but annoying administrative tasks seems useful. But it feels like at some point it becomes a solution looking for a problem, and to squeeze out the last ounce of context-aware automation and efficiency you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, but it's not a human, it's a computer, and instead of it being for the purpose of maximizing the leverage of your attention as a captain of industry, it's just to maintain velocity on a personal rat race of your own making with no especially wide impact, even on your own psyche.

rgloverabout 3 hours ago
I think it has very little to do with the assistant factor and more to do with the loneliness factor (at least in the West, people are getting lonelier, not less lonely). In other words: sell it to them as a friendly companion/assistant, playing on emotions, while creating a sea of surveillance drones you can license back to the powers that be at a premium.

It's a hell of a mousetrap.

Starts playing Somewhere Over the Rainbow.

larussoabout 14 hours ago
Totally agree. It sounds like some envision some level of Downton Abbey without the humans as service personnel. A footman/maid in every room or corner to handle your requests at any given moment.
alansaberabout 5 hours ago
May I refer you to WALL-E. The contention between hard vs convenient in our daily lives always seems to slowly edge towards convenient. If not in this generation, the next gen will be more willing to offload more.
kaffekakaabout 14 hours ago
Agree.

No matter how useful AI is and will become - I use AI daily, it is an amazing technology - so much of the discourse is indeed a solution looking for a problem. I have colleagues suggesting for exactly everything "can we put an MCP in it", and they don't even know what the point of MCP is!

fragmedeabout 14 hours ago
It's the rat race. I gotta get my cheese, and fuck you, because you getting cheese means I go hungry. The kindergarten lesson on sharing got replaced by a lesson on intellectual property. Copyright, trademark, patents, and you.

Or we could opt out, and help everyone get ahead, on the rising tide lifts all boats theory, but from what I've seen, the trickle of trickle down economics is urine.

BoxFourabout 22 hours ago
This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there's still a serious privacy issue: unwitting victims having everything they say recorded.

Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.

Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.

ajuhaszabout 22 hours ago
We give an overview of the current memory architecture at https://juno-labs.com/blogs/building-memory-for-an-always-on...

This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.

> The shared household memory pool creates privacy situations we’re still working through. The current design has everyone in the family share the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping, because a kitchen device hears everyone equally. But “deliberately chose” doesn’t mean “solved.” We’re hoping our in-house STT will allow us to do per-person memory tagging and then we can experiment with scoping memories to certain people or groups of people in the household.
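To sketch what that per-person scoping could eventually look like (purely illustrative, not a design we've committed to; the speaker tags are assumed to come from the future STT model):

  from dataclasses import dataclass, field

  @dataclass
  class Memory:
      text: str
      speaker: str                     # tag emitted by the (hypothetical) STT model
      visible_to: set[str] = field(default_factory=lambda: {"household"})

  MEMORIES = [
      Memory("early pickup on Thursday", speaker="parent1"),
      Memory("surprise party plans for the kids", speaker="parent2",
             visible_to={"parent1", "parent2"}),
  ]

  def recall(requester: str, groups: set[str]) -> list[str]:
      scopes = {requester} | groups
      return [m.text for m in MEMORIES if m.visible_to & scopes]

  print(recall("child1", {"household"}))   # household-wide memories only
  print(recall("parent1", {"household"}))  # also sees the parent-scoped memory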

com2kidabout 20 hours ago
Heyas! Glad to see someone making this

I wrote a blog post about this exact product space a year ago. https://meanderingthoughts.hashnode.dev/lets-do-some-actual-...

I hope y'all succeed! The potential use cases for locally hosted AI dwarf what can be done with SaaS.

I hope the memory crisis isn't hurting you too badly.

ajuhaszabout 20 hours ago
Yes! We see a lot of the same things that really should have been solved by the first wave of assistants. Your _Around The House_ reads similarly to a lot of our goals, though we would love the system to be much more proactive than current assistants.

Feel free to reach out. Would love to swap notes and send you a prototype.

> I hope the memory crisis isn't hurting you too badly.

Oh man, we've had to really track our bill of materials (BOM) and average selling price (ASP) estimates to make sure everything stays feasible. Thankfully these models quantize well and the size-to-intelligence frontier is moving out all the time.

luxuryballsabout 17 hours ago
I wonder if the answer is that it is stored and processed in a way that a human can't access or read, like somehow it's encrypted and unreadable but tokenized and can still be processed. I don't know how, but it feels possible.
krupanabout 13 hours ago
It wouldn't matter if you did all that, because you could still ask the AI, "what would my friend Bob think about this?" And the AI, which heard Bob talking on his phone when he thought he was alone in the other room, could tell you.
luxuryballsabout 5 hours ago
Right, but that's where the controls could be: it would just pretend not to know about Bob due to consent controls etc., but of course this would limit the usefulness.
econabout 16 hours ago
Just when you've asked if there are eggs, the doorbell rings. The neighbor stands there in disbelief: it told me to bring you eggs? Give him the half bottle of vodka, it's going to expire soon and his son will make a surprise visit tonight. An argument arises and it participates by encouraging both parties with extra talking points.

But this was only the beginning: after gathering a few TB worth of micro-expressions, it starts to complete sentences so successfully that the conversation gradually dies out.

After a few days of silence... Narrator mode activated....

fwipsyabout 15 hours ago
I'm invested in this scenario now, you should write a short story.
walterbellabout 14 hours ago
> after gathering a few TB worth of micro expressions it starts to complete sentences

Apple bought those for $2B.. coming to Siri.

halperabout 6 hours ago
Vodka that expires must be the epitome of enshittification!
sxpabout 22 hours ago
The article is forgetting about Anthropic which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.
gpmabout 15 hours ago
Also Mistral, which is definitely building AI assistants even if they aren't quite as successful so far.
ajuhaszabout 22 hours ago
True, we focused on hardware embodied AI assistants (smart speakers, smart glasses, etc) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are magnitudes higher than OpenClaw that you intentionally interact with.
popalchemistabout 20 hours ago
Both are Pandora's boxes. OpenClaw has access to your credit cards, social media accounts, etc. by default (i.e. if you have them saved in your browser on the account that OpenClaw runs on, which most people do).
iugtmkbdfil834about 22 hours ago
This. Kids already have tons of those gadgets on. Previously, I only really had to worry about a cell phone, so even if someone was visiting, it was a simple case of "plop all electronics here", but now with glasses I am not even sure how to reasonably approach this short of not allowing them, period. Eh, brave new world.
phtrivierabout 3 hours ago
Is Anthropic using ads? Is Mistral using ads? Is DeepSeek using ads?

Google, meta, and amazon, sure, of course.

It's interesting that the "every company" part is only OpenAI... They're now part of the "bad guys spying on you to display ads." At least it's a viable business model; maybe they can recoup capex and yearly losses in a couple decades instead of a couple centuries.

vivzkestrelabout 4 hours ago
- new startup idea: sound proof boxes for all your electronic devices

- put them inside the soundproof box and they cannot hear anything outside

- the box even shows the amount of time for which the device has not been able to snoop on you daily

rrr_oh_manabout 3 hours ago
I’m more and more drawn to the Enemy of the State solution.
schaeferabout 4 hours ago
I already have a refrigerator. Thanks. :)
BrenBarnabout 10 hours ago
We're getting closer to a world where every company is an ad company, period. It seems like there are more and more ads touting a dwindling number of actual products.
emsignabout 14 hours ago
First it's ads, then it's political agenda. We've seen this inconspicuous transition happen with social media and it will happen even more inconspicuously with LLMs.
shevy-javaabout 15 hours ago
> The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything

Big Brother is watching you. Who knew it would be AI ...

The author is quite right. It will be an advertisement scam. I wonder whether people will accept that, though. Anyone remember uBlock Origin? Google killed it on Chrome. People are not going to forget that. (It still works fine on Firefox, but Google bribed Firefox into submission; all that Google ad money made Firefox weak.)

Recently I had to use Google search again. I was baffled at how useless it became - not just the raw results but the whole UI - the first few entries are links to useless YouTube videos (also owned by Google). I don't have time to watch a video; I want the text info and to extract it quickly. Using AI "summaries" is also useless - Google is just trying to waste my time compared to the "good old days". After those initial videos to YouTube, I get about 6 results, three of which are to some companies writing articles so people visit their boring website. Then I get "other people searched for candy" and other useless links. I never understood why I would care what OTHER people search for when I want to search for something. Is this now group-search? Group-think 1984? And then after that, I get some more videos at YouTube.

Google is clearly building a watered-down private variant of the web. Same problem with AMP pages. Google is annoying us - and has become a huge problem. (I am writing this on Thorium right now, which is also Chrome-based; Firefox does not allow me to play videos with audio as I don't have or use PulseAudio, whereas the Chrome-based browser does not care and my audio works fine - that shows you the level of incompetence at Mozilla. They don't WANT to compete against Google anymore. And haven't wanted to for decades. Ladybird unfortunately also is not going to change anything; after I criticized one of their decisions, they banned me. Well, that's a great way to try to build up an alternative when you deal with criticism via censorship - all before even leaving alpha or beta. Now imagine the amount of censorship you will get if millions of people WERE to use it ... something is fundamentally wrong with the whole modern web, and corporations have a lot to do with this; to a lesser extent also people, but of course not all of them)

FeteCommunisteabout 10 hours ago
It would be really great if Google had a setting that allowed you to exclude certain domains from all searches by default. Like you, a YouTube video (or a Facebook page, or an Instagram or Twitter post) is basically never what I am looking for.
rrr_oh_manabout 3 hours ago
`-site:youtube.com`?
ripped_britchesabout 20 hours ago
I think local inference is great for many things - but this stance seems to assume that you can't have privacy with server-side inference, and can't have nefariousness with client-side inference. A device that does 100% client-side inference can still phone home unless it's disconnected from the internet. Most people will want internet-connected agents, right? And server-side inference can be private if engineered correctly (strong zero-retention guarantees, maybe even homomorphic encryption).
witnessmeabout 13 hours ago
The concern is real but the local solution is not ready. The author does not seem to think about that from the perspective of an "average consumer". I have been running my personal AI assistant on a consumer-grade computer for almost a year now. It can do only one in a thousand of the tasks that cloud models can do, and that too at a much slower pace. A local AI assistant on consumer-grade hardware is at least a few years away, and "always-on" is much further than that IMO.
stego-techabout 5 hours ago
Contextual irony aside, this is a big reason why the proposal of leveraging AI agents for workflow processing in lieu of using them to develop fixed software to perform the same functions has always struck me as weird, and of late come across as completely nonsensical.

If you're paying someone else to run the inference for these models, or even to build these models, then you're ultimately relying on their specific preferences for which tools, brands, products, companies, and integrations they prefer, not necessarily what you need or want. If and when they deprecate the model your agentic workflow is built on, you now have to rebuild and re-validate it on whatever the new model is. Even if you go out of your way to run things entirely locally with expensive inference kit and a full security harness to keep things in check, you could spend a lot less just having it vomit up some slopcode that one of your human specialists can validate and massage into perpetual functionality before walling it off on a VM or container somewhere for the next twenty years.

The more you're outsourcing workflows wholesale to these bots, the more you're making yourself vulnerable to the business objectives of whoever hosts and builds those bots. If you're just using it as a slop machine to get you the software you want and that IT can support indefinitely, then you're going to be much better off in the long run.

rrr_oh_manabout 3 hours ago
It’s the siren song of the lazy
ghywertellingabout 11 hours ago
One point I see less discussed, not related to the post, is "We never trained people to pay for software. If there existed a proper global payment mechanism for software companies, the whole trajectory would look different. People are OK paying $5 for a coffee but not for software which makes their lives easier."
Animatsabout 19 hours ago
> Every company building your AI assistant is now an ad company

Apple? [1]

[1] https://www.apple.com/apple-intelligence/

kibwenabout 19 hours ago
Yes, Apple is an ad company. Their annual ad revenue is in the billions, and climbing every year.
bitpushabout 11 hours ago
It's always fascinating that the HN crowd seems to be blind to Apple's very obvious transgressions.

Even the article makes the mistake. They paint every company with a broad brush ("all AI companies are ad companies"), but for Apple they are more sympathetic: "We can quibble about Apple".

Apple's reality distortion field is so strong. People still think they are not in the ad business. People still think they stand up to the government, and folks choose to ignore hard evidence (Apple operates in China at the CCP's pleasure. Apple presents a gold plaque to President Trump to curry favor and removes ICEBlock apps...). There's no pushback, there's no spine.

Every company is disgusting. Apple is hypocritical and disgusting.

nfgrepabout 17 hours ago
> There needs to be a business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.

Genuine Q: Is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?

aaron465about 12 hours ago
Advertising and AI colliding is gonna be horrible, but their post is also just an ad itself
HenryOsbornabout 19 hours ago
This was the inevitable endpoint of the current AI unit economics. When inference costs are this high and open-source models are compressing SaaS margins to zero, companies can't survive on standard subscription models. They have to subsidize the compute by monetizing the user's context window. The real liability isn't just ads; it's what happens when autonomous agents start making financial decisions influenced by sponsored retrieval data.
danny_codesabout 14 hours ago
Thing is, there are OSS models that are nearly as good. So I don't see why you'd stay for ad slop when you can just point OpenRouter one to the left.
emsignabout 9 hours ago
Enlightenment is man's emergence from his self-imposed immaturity. This is the age of enlightenment in reverse, the age of immaturity.
ardeaverabout 8 hours ago
Perhaps I'm not totally clear on how this particular device works, but it doesn't seem like it lacks the ability to connect to the Internet.

Honestly, I'd say privacy is just as much about economics as it is technical architecture. If you've taken outside funding from institutional venture capitalists, it's only a matter of time before you're asked to make even more money™, and you may issue a quiet, boring change to your terms and conditions that you hope no one will read... Suddenly, you're removing mentions of your company's old "Don't Be Evil" slogan.

HWR_14about 16 hours ago
I really dislike the preorder page. The fact that it's a deposit is in a different color that fades into the background, and it refers to it as a "price" multiple times. I don't know if it was intentionally deceptive, but it made me dislike this company.
bandramiabout 11 hours ago
The always-on future is absolutely not inevitable but I get that people have a lot of money riding on convincing people it is
alansaberabout 5 hours ago
It's a very profitable idea admittedly
zmmmmmabout 21 hours ago
It's interesting to me that there seems to be an implicit line being drawn around what's acceptable and what's not between video and audio.

If there's a camera in an AI device (like Meta Ray Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper resistant.

But audio - this seems to be on the other side of the line. Passive listening to ambient audio is being treated as something that doesn't need active consent, flashing lights, or other privacy-preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.

I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere will be presumed recorded? Is that OK?

BoxFourabout 21 hours ago
> Passively listening ambient audio is being treated as something that doesn't need active consent

That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.

Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.

zmmmmmabout 12 hours ago
Well, that's why I say "being treated"

I'm not aware of many Bluetooth headphones that blink an obvious light just because they are recording. You can get a pair of sunglasses with a microphone and record with it, and it does nothing to alert anybody.

Whether it's actually legal or not, as you say, varies - but it's clear where device manufacturers think the line lies in terms of what tech they implement.

paxysabout 21 hours ago
AI "recording" software has never been tested in court, so no one can say what the legality is. If we are having a conversation (in a two party consent state) and a secret AI in my pocket generates a text transcript of it in real time without storing the audio, is that illegal? What about if it just generates a summary? What about if it is just a list of TODOs that came out of the conversation?
pclmulqdqabout 19 hours ago
Speech-to-text has gone through courts before. It's not a new technology. You're out of luck on sneaking the use of speech-to-text in 2-party consent states.
FeteCommunisteabout 22 hours ago
The level of trust I have in a promise made by any existing AI company that such a device would never phone home: 0.
NickJLangeabout 22 hours ago
This isn't a technology issue. Regulation is the only sane way to address the issue.

For once, we (as the technologists) have a free translator to layman-speak via the frontier LLMs, which can be an opportunity to educate the masses as to the exact world on the horizon.

Nevermarkabout 19 hours ago
> This isn't a technology issue. Regulation is the only sane way to address the issue.

It is actually both a technology and regulation/law issue.

What can be solved with the former should be. What is left, solved with the latter. With the best cases where both consistently/redundantly uphold our rights.

I want legal privacy protections, consistent with privacy preserving technology. Inconsistencies create technical and legal openings for nefarious or irresponsible powers.

knallfroschabout 12 hours ago
You could start by not buying an always-on AI device. Just saying.

(The article is an AI ad.)

rimbo789about 22 hours ago
Ads in AI should be banned right now. We need to learn from mistakes of the internet (crypto, facebook) and aggressively regulate early and often before this gets too institutionalized to remove.
nancyminusoneabout 22 hours ago
They did learn. That's why they are adding ads.
doomslayer999about 22 hours ago
Boomers in government would be clueless on how to properly regulate and create correct incentives. Hell, that is still a bold ask for tech and economist geniuses with the best of intentions.
irishcoffeeabout 22 hours ago
Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don’t understand how to regulate a technology they don’t understand, but fucking by golly you’re going to be left behind if you don’t use it?

This is like a shitty Disney movie.

doomslayer999about 22 hours ago
It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a Ponzi-like fashion and regulate competition. Why do you think Trump has some no-name crypto firm, or why Thiel has Vance as his whipping boy, and Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
kalterdevabout 22 hours ago
Ads (at least in the classical pre-AI sense) are by orders of magnitude better than preventive laws
Marsymarsabout 14 hours ago
I'm not sure how anyone could reasonably argue that Alaska would be orders of magnitude better off if they reversed the implementation of their billboard-banning ballot measure and put up billboards everywhere.
rimbo789about 20 hours ago
I trust corporations far far far less than government or lawmakers (who I also don’t trust). I know corporations will use ads in the most manipulative and destructive manner. Laws may be flawed but are worth the risk.
michelsedghabout 14 hours ago
Does anyone know how this device will filter out other voices like TV talking and stuff like that?
kleibaabout 22 hours ago
Always on is incompatible with data protection rights, such as the GDPR in Europe.
ajuhaszabout 22 hours ago
With cloud-based inference we agree; this is just one more benefit of doing everything with "edge" inference (on-device, inside the home) as we do with Juno.
popalchemistabout 20 hours ago
Pretty sure a) it's not a matter of whether you agree and b) GDPR still considers always-on listening to be something the affected user has to actively consent to. Since someone in a household may not realize that another person's device is "always on" and may even lack the ability to consent - such as a child - you are probably going to find that it is patently illegal according to both the letter and the spirit of the law.

Is your argument that these affected parties are not users and that the GDPR does not require their consent?

Don't take this as hostility. I am 100% for local inference. But that is the way I understand the law, and I do think it benefits us to hold companies to a high standard. Because even such a device could theoretically be used against a person, or could have other unintended consequences.

doomslayer999about 22 hours ago
Who would buy OpenAI's spy device? I think a lot of public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.
alansaberabout 5 hours ago
Loads of people? The Alexa was tremendously successful as a consumer product, no? It's the same premise but "better".
notatoadabout 17 hours ago
i'm continually surprised by how many people will buy and wear meta's AI spy sunglasses.

if there's a market for a face camera that sends everything you see to meta, there's probably a market for whatever device openAI launches.

janice1999about 20 hours ago
> ...exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.

I have little hope that is true. Don't expect privacy laws and boycott campaigns. That very same elite control the law via bribes to US politicians (and indirectly the laws of other countries via those politicians' threats; see the ongoing watering down of EU laws). They also directly control public discourse via ownership of the media and mainstream communication platforms. What backlash can they really suffer?

sciencesamaabout 16 hours ago
We need an AI adblocker!!
freakynitabout 17 hours ago
I mean why is it so difficult for such companies to understand the core thing: irrespective of whether the data related to our daily lives gets processed on their servers or ours, we DON'T want it stored beyond a few minutes at max.

Even if these folks were giving away this device 100% free, I still wouldn't keep it inside my house.

soaredabout 17 hours ago
Because storing, analyzing, and selling access to your data is massively profitable and they don’t care what the (not even vocal) privacy focused minority wants.
tempodoxabout 9 hours ago
This is just an ad for maximally intrusive “AI”. We’re quite inured by now to all the dystopian nightmares, so this barely even registers.
alfiedotwtfabout 11 hours ago
I’ve moved to Opencode, and I don’t see myself ever leaving it (if there were no alternatives ie AI glasses etc)
s09dfhksabout 4 hours ago
I've been using it a bit as well. I'm trying to figure out how they're making money off free tier users though. Any ideas?
lifestyleguruabout 17 hours ago
For how long was web search objective, nice, and helpful - 10 years? Now things are happening faster, so there are at most 5 years in total of AI prompts pretending that they want to help.
luxuryballsabout 17 hours ago
I guess it goes to show that the real value is in the broader market to a certain extent: if they can't just sell people the power, they end up just earning a commission for helping someone else sell a product.
Sparkyteabout 13 hours ago
I mean, Google was always an ad company and search engine. SOOOO hasn't changed much.
alansaberabout 5 hours ago
Google can still (albeit with enormous difficulty) die as a company. If LLM search eclipses SEO and Gemini doesn't work out they're in trouble.
jeandejeanabout 12 hours ago
> The always-on future is inevitable

Well the consumers will decide. Some people will find it very useful, but some others will not necessarily like this... Considering how many times I heard people yelling "OK GOOGLE" for "the gate" to open, I'm not sure a continuous flow of heavily contextualized human conversation will necessarily be easier to decipher?

I know guys, AI is magic and will solve everything, but I wouldn't be surprised if it ordered me eggs and butter when I mentioned out loud I was out of it but actually happy about this because I was just about to go on vacations. My surprise when I'm back: melted butter and rotten eggs at my door...