It's triples all the way down
Disclaimer: Many of the features presented here are rather new and cannot be found in the open-source version of Virtuoso.
Last time we saw how to share files and folders stored in the Virtuoso DAV system. Today we will protect and share data stored in Virtuoso’s Triple Store – we will share RDF data.
Virtuoso is actually a quadruple store, which means each triple lives in a named graph. In Virtuoso, named graphs can be public or private (in reality it is a bit more complex than that, but this view of things is sufficient for our purposes). Public graphs are readable and writable by anyone who has permission to read or write in general; private graphs are only readable and writable by administrators and by those to whom named-graph permissions have been granted. The latter case is what interests us today.
We will start by inserting some triples into a named graph as dba – the master of the Virtuoso universe:
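The original snippet is not reproduced here, but an insert of this shape does the job; the triples themselves are made up for illustration, only the graph name comes from this article:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
INSERT DATA {
  GRAPH <urn:trueg:demo> {
    <urn:trueg:demo:alice> a foaf:Person ;
      foaf:name "Alice" .
  }
}
```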
This graph is now public and can be queried by anyone. Since we want to make it private we quickly need to change into a SQL session since this part is typically performed by an application rather than manually:
$ isql-v localhost:1112 dba dba
Connected to OpenLink Virtuoso
Driver: 07.10.3211 OpenLink Virtuoso ODBC Driver
OpenLink Interactive SQL (Virtuoso), version 0.9849b.
Type HELP; for help and EXIT; to exit.
SQL> DB.DBA.RDF_GRAPH_GROUP_INS ('http://www.openlinksw.com/schemas/virtrdf#PrivateGraphs', 'urn:trueg:demo');
Done. -- 2 msec.
Now our new named graph urn:trueg:demo is private and its contents cannot be seen by anyone. We can easily test this by logging out and trying to query the graph:
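A query of the following shape, for example, now comes back empty for an unauthenticated user (a minimal sketch, not the exact query from the original screenshot):

```sparql
SELECT * WHERE {
  GRAPH <urn:trueg:demo> { ?s ?p ?o }
}
```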
But now we want to share the contents of this named graph with someone. Like before we will use my LinkedIn account. This time, however, we will not use a UI but Virtuoso’s RESTful ACL API to create the necessary rules for sharing the named graph. The API uses Turtle as its main input format. Thus, we will describe the ACL rule used to share the contents of the named graph as follows.
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix oplacl: <http://www.openlinksw.com/ontology/acl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<#rule> a acl:Authorization ;
  rdfs:label "Share Demo Graph with trueg's LinkedIn account" ;
  acl:agent <http://www.linkedin.com/in/trueg> ;
  acl:accessTo <urn:trueg:demo> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:PrivateGraphs .
Virtuoso makes use of the ACL ontology proposed by the W3C and extends it with several custom classes and properties in the OpenLink ACL Ontology. Most of this little Turtle snippet should be obvious: we create an Authorization resource which grants Read access to urn:trueg:demo for agent http://www.linkedin.com/in/trueg. The only tricky part is the scope. Virtuoso has the concept of ACL scopes, which group rules by their resource type. In this case the scope is private graphs; another typical scope would be DAV resources.
Given that file rule.ttl contains the above resource we can post the rule via the RESTful ACL API:
$ curl -X POST --data-binary @rule.ttl -H"Content-Type: text/turtle" -u dba:dba http://localhost:8890/acl/rules
As a result we get the full rule resource including additional properties added by the API.
Finally we will login using my LinkedIn identity and are granted read access to the graph:
We see all the original triples in the private graph. And as before with DAV resources no local account is necessary to get access to named graphs. Of course we can also grant write access, use groups, etc.. But those are topics for another day.
Using ACLs with named graphs as described in this article requires some basic configuration. The ACL system is disabled by default. In order to enable it for the default application realm (another topic for another day) the following SPARQL statement needs to be executed as administrator:
sparql
prefix oplacl: <http://www.openlinksw.com/ontology/acl#>
with <urn:virtuoso:val:config>
delete {
  oplacl:DefaultRealm oplacl:hasDisabledAclScope oplacl:Query , oplacl:PrivateGraphs .
}
insert {
  oplacl:DefaultRealm oplacl:hasEnabledAclScope oplacl:Query , oplacl:PrivateGraphs .
};
This will enable ACLs for named graphs and SPARQL in general. Finally the LinkedIn account from the example requires generic SPARQL read permissions. The simplest approach is to just allow anyone to SPARQL read:
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix oplacl: <http://www.openlinksw.com/ontology/acl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#rule> a acl:Authorization ;
  rdfs:label "Allow Anyone to SPARQL Read" ;
  acl:agentClass foaf:Agent ;
  acl:accessTo <urn:virtuoso:access:sparql> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:Query .
I will explain these technical concepts in more detail in another article.
Posted at 14:21
Dropbox, Google Drive, OneDrive, Box.com – they all allow you to share files with others. But they all do it via the strange concept of public links. Anyone who has this link has access to the file. On first glance this might be easy enough but what if you want to revoke read access for just one of those people? What if you want to share a set of files with a whole group?
I will not answer these questions per se. I will show an alternative based on OpenLink Virtuoso.
Virtuoso has its own WebDAV file storage system built in. Thus, any instance of Virtuoso can store files and serve these files via the WebDAV API (and an LDP API for those interested) and an HTML UI. See below for a basic example:
This is just your typical file browser listing – nothing fancy. The fancy part lives under the hood in what we call VAL – the Virtuoso Authentication and Authorization Layer.
We can edit the permissions of one file or folder and share it with anyone we like. And this is where it gets interesting: instead of sharing with an email address or a user account on the Virtuoso instance we can share with people using their identifiers from any of the supported services. This includes Facebook, Twitter, LinkedIn, WordPress, Yahoo, Mozilla Persona, and the list goes on.
For this small demo I will share a file with my LinkedIn identity http://www.linkedin.com/in/trueg. (Virtuoso/VAL identifies people via URIs; thus, it has schemes for all supported services. For a complete list see the Service ID Examples in the ODS API documentation.)
Now when I logout and try to access the file in question I am presented with the authentication dialog from VAL:
This dialog allows me to authenticate using any of the supported authentication methods. In this case I will choose to authenticate via LinkedIn which will result in an OAuth handshake followed by the granted read access to the file:
It is that simple. Of course these identifiers can also be used in groups, allowing you to share files and folders with a set of people instead of just one individual.
Next up: Sharing Named Graphs via VAL.
Posted at 14:21
Digitally signing emails is always a good idea. People can verify that you actually sent the mail and they can encrypt emails in return. A while ago Kingsley showed how to sign emails in Thunderbird. I will now follow up with a short post on how to do the same in Evolution.
The process begins with actually getting an X.509 certificate including an embedded WebID. There are a few services out there that can help with this, most notably OpenLink’s own YouID and ODS. The former allows you to create a new certificate based on existing social service accounts. The latter requires you to create an ODS account and then create a new certificate via Profile edit -> Security -> Certificate Generator. In any case make sure to use the same email address for the certificate that you will be using for email sending.
The certificate will actually be created by the web browser, making sure that the private key is safe.
If you are a Google Chrome user you can skip the next step since Evolution shares its key storage with Chrome (and several other applications). If you are a user of Firefox you need to perform one extra step: go to the Firefox preferences, into the advanced section, click the “Certificates” button, choose the previously created certificate, and export it to a .p12 file.
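If you want to double-check what ended up in the exported file before importing it, openssl can dump the certificate details, including the Subject Alternative Name that holds the WebID (the filename here is just an example):

$ openssl pkcs12 -in mycert.p12 -nokeys -clcerts | openssl x509 -noout -text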
Back in Evolution’s settings you can now import this file:
To actually sign emails with your shiny new certificate stay in the Evolution settings, choose to edit the Mail Account in question, select the certificate in the Secure MIME (S/MIME) section and check “Digitally sign outgoing messages (by default)“:
The nice thing about Evolution here is that in contrast to Thunderbird there is no need to manually import the root certificate which was used to sign your certificate (in our case the one from OpenLink). Evolution will simply ask you to trust that certificate the first time you try to send a signed email:
Posted at 14:21
After almost two years working at Asemantics, I left to join the Fondazione Bruno Kessler (FBK), a quite large research institute based in Trento.
These last two years have been amazing: I met very skilled and enthusiastic people and worked with them on a broad set of different technologies. Every day spent there was an opportunity to learn something new, and in the end they are now very good friends more than colleagues. Asemantics is now part of the bigger Pro-netics Group.
Having moved from Rome, I decided to follow Giovanni Tummarello and Michele Mostarda to launch from scratch a new research unit at FBK called "Web of Data". FBK is a well-established organization with several units working in a plethora of different research fields. Every day there is an opportunity to join workshops and other kinds of events.
Just to give you an idea of how things work here: in April 2009 David Orban gave a talk on "The Open Internet of Things" attended by a large number of researchers and students. Aside from FBK, Trento has a quite active community hanging out around the Semantic Web.
"The Semantic Valley" – that's what they call this euphoric movement around these technologies.
Back to me: the new "Web of Data" unit has joined the Sindice.com army, and the last-minute release of Any23 0.2 is only the first outcome of this joint effort on the Semantic Web Index between DERI and FBK.
In particular, the Any23 0.2 release has been my first task here. It's a library, a service, an RDF distiller. It's used on board the Sindice ingestion pipeline, it's publicly available here, and yesterday I spent a couple of minutes writing this simple bookmarklet:
javascript:window.open('http://any23.org/best/' + window.location);
Once in your browser's bookmarks bar, pressing it on any Web page returns a bunch of RDF triples distilled by the Any23 servlet.
So, what’s next?
The Web of Data unit has just started. More things, from the next release of Sindice.com to other projects currently in inception, will see the light. I really hope to keep contributing to the concrete consolidation of the Semantic Web – the Web of Data, or Web 3.0, or whatever we'd like to call it.
Posted at 14:10
This is a (short) technical post.
Every day I face the problem of getting Linked Data URIs that uniquely identify a "thing", starting from an ambiguous, poor and flat keyword or description. One of the first steps in developing an application that consumes Linked Data is to provide a mechanism that links our own data sets to one (or more) LoD bubbles. To get a clear idea of why identifiers matter, I suggest you read this note from Dan Brickley: starting from some needs we encountered within the NoTube project, he clearly underlined the importance of LoD identifiers. Even if the problem of uniquely identifying words and terms falls into the broader category usually known as term disambiguation, I'd like to clarify that what I'm going to explain is a narrow restriction of the whole problem.
What I really need is a simple mechanism that allows me to convert one specific type of identifiers to a set of Linked Data URIs.
For example, I need something that, given a book's ISBN number, returns a set of URIs referring to that book. Or, given the title of a movie, I expect back some URIs (from DBpedia or LinkedMDB or whatever) identifying and describing it in a unique way.
Isn’t SPARQL enough for you to do that?
Yes, obviously the following SPARQL query may be sufficient:
but what I need is something quicker that I may invoke as an HTTP GET like:
http://localhost:8080/resolver?value=978-0-374-16527-7&category=isbn
returning back to me a simple JSON:
{
  "mappings": ["http://dbpedia.org/resource/Gomorrah_%28book%29"],
  "status": "ok"
}
But the real issue here is the code overhead necessary to add other kinds of identifier resolution. Let's imagine, for instance, that I have already implemented this kind of service and I want to add another resolution category. I would have to hard-code another SPARQL query, modify the code to invoke it as a service, and redeploy everything.
I’m sure we could do better.
If we take a closer look at the above SPARQL query, we easily figure out that the problem can be highly generalized. In fact, this kind of resolution often means performing a SPARQL query asking for URIs that have a certain value for a certain property – such as dbprop:isbn in the ISBN case.
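For the ISBN case, such a generalized query boils down to a sketch like this one (using the property from the resolver configuration shown later):

```sparql
SELECT DISTINCT ?subject
WHERE { ?subject dbpedia-owl:isbn "978-0-374-16527-7" }
```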
And this is what I did the last two days: The NoTube Identity Resolver.
A simple Web service (described in the figure below) fully customizable by simply editing an XML configuration file.
The resolvers.xml file allows you to provide a simple description of the resolution policy that will be accessible with a simple HTTP GET call.
Back to the ISBN example, the following piece of XML is enough to describe the resolver:
<resolver id="2" type="normal">
  <category>isbn</category>
  <endpoint>http://dbpedia.org/sparql</endpoint>
  <lookup>dbpedia-owl:isbn</lookup>
  <sameas>true</sameas>
  <matching>LITERAL</matching>
</resolver>
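To make the generalization concrete, here is a small hypothetical sketch of how a "normal" resolver entry could be expanded into the SPARQL query sent to the configured endpoint. The function name and signature are mine, for illustration only, not the service's actual API:

```python
# Hypothetical sketch of the query expansion behind a "normal" resolver entry.
def build_query(lookup: str, value: str, matching: str = "LITERAL") -> str:
    # LITERAL matching quotes the value; a URI match would wrap it in <...>
    obj = f'"{value}"' if matching == "LITERAL" else f"<{value}>"
    return f"SELECT DISTINCT ?s WHERE {{ ?s {lookup} {obj} }}"

print(build_query("dbpedia-owl:isbn", "978-0-374-16527-7"))
# -> SELECT DISTINCT ?s WHERE { ?s dbpedia-owl:isbn "978-0-374-16527-7" }
```

Adding a new resolution category then only means adding a new `<resolver>` element, not new Java code.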
Where:
Moreover, the NoTube Identity Resolver also gives you the possibility to specify more complex resolution policies through a SPARQL query, as shown below:
<resolver id="3" type="custom">
  <category>movie</category>
  <endpoint>http://dbpedia.org/sparql</endpoint>
  <sparql><![CDATA[SELECT DISTINCT ?subject
    WHERE { ?subject a <http://dbpedia.org/ontology/Film>.
            ?subject <http://dbpedia.org/property/title> ?title.
            FILTER (regex(?title, "#VALUE#")) }]]>
  </sparql>
  <sameas>true</sameas>
</resolver>
In other words, every resolver described in the resolvers.xml file enables one kind of resolution mechanism without writing a line of Java code.
Do you want to try?
Just download the war package, get this resolvers.xml (or write your own), export the RESOLVERS_XML_LOCATION environment variable pointing to the folder where resolvers.xml is located, deploy the war on your Apache Tomcat application server, start the application, and try it out by pointing your browser to:
http://localhost:8080/notube-identity-resolver/resolver?value=978-0-374-16527-7&category=isbn
That’s all folks
Posted at 14:10
Just a few days ago the popular ReadWriteWeb published a list of the 2009 Top Ten Semantic Web products, as they did one year ago with the 2008 Top Ten.
These two milestones are a good opportunity to take stock – or just to do a quick overview of what's changed in the "Web of Data" only one year later.
The 2008 Top Ten featured the following applications, listed in the same ReadWriteWeb order and enriched with some personal opinions.
Yahoo Search Monkey
It’s great. Search Monkey represents the first kind of next-generation search engines due its capability to be fully customized by third party developers. Recently, a breaking news woke up the “sem webbers” of the whole planet: Yahoo started to show structured data exposed with RDFa in the search results page. That news bounced all over the Web and those interested in SEO started to appreciate Semantic Web technologies for their business. But, unfortunately, at the moment I’m writing, RDFa is not showed anymore on search results due to an layout update that broke this functionality. Even if there are rumors on a imminent fixing of this, the main problem is the robustness and the reliability of that kind of services: investors need to be properly guaranteed on the effectiveness of their investments.
Powerset
Probably, this neat application became really popular when it was acquired by Microsoft. It allows you to make simple natural-language queries like "film where Kevin Spacey acted" and, at first glance, the results seem really much better than those of other traditional search engines. Honestly, I don't really know what technologies they are using to do this magic. But it would be nice to compare their results with a hypothetical service that translates such human text queries into a set of SPARQL queries over DBpedia. Anyone interested in doing that? I'll be more than happy to be engaged in a project like that.
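For the Kevin Spacey example, such a translation could land on a DBpedia query roughly like this sketch:

```sparql
SELECT DISTINCT ?film
WHERE {
  ?film a dbpedia-owl:Film ;
        dbpedia-owl:starring <http://dbpedia.org/resource/Kevin_Spacey> .
}
```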
Open Calais
With a large and massive branding operation these guys built the image of this service as if it were the only one fitting everyone's needs when dealing with semantic enrichment of unstructured free text. Even if this is partly true (why not mention the Apache UIMA Open Calais annotator?), there are a lot of other interesting services that are, in certain respects, more intriguing than the Reuters one. Don't believe me? Give AlchemyAPI a try.
Dapper
I have to admit my ignorance here. I had never heard about it, but it looks very interesting. Certainly this service, which mainly offers some sort of semantic advertisement, is more than promising. I'll keep an eye on it.
Hakia
Down at the moment I’m writing.
Tripit
Many friends of mine are using it, and this could be enough to give it popularity. Again, I don't know if they are using any of the W3C Semantic Web technologies to model their data. RDF or not, this is a neat example of a semantic web application with good potential: is that enough for you?
BooRah
Another case of personal ignorance. This magic is, mainly, a restaurant review site. BooRah uses semantic analysis and natural language processing to aggregate reviews from food blogs. Because of this, BooRah can recognize praise and criticism in these reviews and then rate restaurants accordingly. One criticism? The underlying data are perhaps not so rich. It seems impossible to me that searching for "Pizza in Italy" returns nothing.
Blue Organizer (or GetGlue?)
It’s not a secret that I consider Glue one of the most innovative and intriguing stuff on the Web. And when it appeared on the ReadWriteWeb 10 Top Semantic Web applications was far away from what is now. Just one year later, GetGlue (Blue Organizer seems to be the former name) appears as a growing and live community of people that realized how is important to wave the Web with the aim of a tool that act as a content cross-recommender. Moreover GetGlue provides a neat set of Web APIs that I’m widely using within the NoTube project.
Zemanta
A clear idea, powerful branding, and a well-designed set of services accessible via Web APIs make Zemanta one of the most successful products on the stage. Do I have to say anything more? If you like Zemanta, I suggest you also keep an eye on Loomp, a nice tool presented at the European Semantic Technology Conference 2009.
UpTake.com
Mainly, a semantic search engine over a huge database containing more than 400,000 hotels in the US. Where's the semantics here? UpTake.com crawls and semantically extracts the information implicitly hidden in those records. A good example of how innovative technologies can be applied to well-known application domains such as hotel search.
One year later…
Indubitably, 2009 has been ruled by the Linked Data Initiative, as I love to call it. Officially, Linked Data is about "using the Web to connect related data that wasn't previously linked, or using the Web to lower the barriers to linking data currently linked using other methods" and, if we look at its growth rate, it is easy to bet on its success.
Here is the 2009 top ten, where I omitted GetGlue, Zemanta and OpenCalais since they already appeared in the 2008 edition:
Google Search Options and Rich Snippets
When this new feature of Google was announced, the whole Semantic Web community realized that something very powerful had started to move. Google Rich Snippets make use of the RDFa contained in HTML Web pages to power the rich-snippets feature.
Feedly
It’s a very very nice feeds aggregator built upon Google Reader, Twitter and FriendFeed. It’s easy to use, nice and really useful (well, at least it seems so to me) but, unfortunately, I cannot see where is the Semantic aspects here.
Apture
This JavaScript cool stuff allows publishers to add contextual information to links via pop-ups which display when users hover over or click on them. Watching HTML pages built with the aid of this tool, Apture closely reminds me of the WordPress Snap Shots plugin. But Apture seems richer than Snap Shots since it allows publishers to directly add links and other stuff they want to display when the pages are rendered.
BBC Semantic Music Project
Built upon Musicbrainz.org (one of the most representative Linked Data clouds), it's a very remarkable initiative. Personally, I'm using it within the NoTube project to disambiguate Last.fm bands. Concretely, given a certain Last.fm band identifier, I query the BBC /music service, which returns a URI. With this URI I ask the sameas.org service to give me other URIs referring to the same band. In this way I can associate with every Last.fm band a set of Linked Data URIs from which to obtain a full flavor of coherent data about them.
Freebase
It’s an open, semantically marked up shared database powered by Metaweb.com a great company based in San Francisco. Its popularity is growing fast, as ReadWriteWeb already noticed. Somehow similar to Wikipedia, Freebase provides all the mechanisms necessary to syndicate its data in a machine-readable form. Mainly, with RDF. Moreover, other Linked Data clouds started to add owl:sameAs links to Freebase: do I have to add something else?
Dbpedia
DBpedia is the nucleus of the Web of Data. The only thing I'd like to add is: it deserves to be in the ReadWriteWeb 2009 top ten more than the others.
Data.gov
It’s a remarkable US government initiative to “increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government.”. It’s a start and I dream to see something like this even here in Italy.
So what’s up in the end?
It’s my opinion that the 2009 has been the year of Linked Data. New clouds born every month, new links between the already existent ones are established and a new breed of developers are being aware of the potential and the threats of Linked Data consuming applications. It seems that the Web of Data is finally taking shape even if something strange is still in the air. First of all, if we give a closer look to the ReadWriteWeb 2009 Top Ten I have to underline that 3 products on 10 already were also in the 2008 chart. Maybe the popular blog liked to stress on the progresses that these products made but it sound a bit strange to me that they forgot nice products such as the FreeMix, Alchemy API, Sindice, OpenLink Virtuoso and the BestBuy.com usage of GoodRelations ontology. Secondly, 3 products listed in the 2009 chart are public-funded initiatives that, even if is reasonable due to the nature of the products, it leave me with the impression that private investors are not in the loop yet.
What do I expect from 2010, then?
A large and massive rush to using RDFa for SEO purposes, a sustained growth of Linked Data clouds and, I really hope, the rise of a new application paradigm grounded in the consumption of such interlinked data.
Posted at 14:10
A couple of years ago, during his live show, the popular Italian blogger and activist Beppe Grillo gave a quick demonstration of how the Web concretely realizes the "six degrees of separation". The Italian blogger, today a Web enthusiast, showed that it was possible for him to get in contact with someone very famous using a couple of different websites: IMDb, Wikipedia and a few others. Starting from a movie where he acted, he could reach the movie's producer, the producer could be in contact with another actor due to previous work together, and so on.
This demonstration consisted of a series of links that were opened, leading to Web pages containing the information from which to extract the relationships the showman wanted to reveal.
This gig came back to my mind while I was thinking about how what I call the "Linked Data Philosophy" is impacting the traditional Web, and I imagined what Beppe Grillo could show nowadays.
Just the following simple, trivial and short SPARQL query:
CONSTRUCT {
  ?actor1 foaf:knows ?actor2
}
WHERE {
  ?movie dbpprop:starring ?actor1 .
  ?movie dbpprop:starring ?actor2 .
  ?movie a dbpedia-owl:Film .
  FILTER(?actor1 = <http://dbpedia.org/resource/Beppe_Grillo>)
  FILTER(?actor1 != ?actor2)
}
Although Beppe is a great comedian, it may be hard even for him to make people laugh with this. But the point here is not about laughs; it's about data: in this sense, the Web of Data is providing an outstanding and extremely powerful way to access an incredible twine of machine-readable interlinked data.
Recently, another nice and remarkable Italian initiative appeared on the Web: OpenParlamento.it. It's, mainly, a service where Italian congressmen are displayed and positioned on a chart based on the similarity of their votes on law proposals.
OK, cool. But how could the Semantic Web improve this?
First of all, it would be very straightforward to provide a SPARQL endpoint serving some good RDF for this data, like the following example:
<rdf:RDF>
  <rdf:Description rdf:about="http://openparlamento.it/senate/Mario_Rossi">
    <rdf:type rdf:resource="http://openparlamento.it/ontology/Congressman"/>
    <foaf:name>Mario Rossi</foaf:name>
    <foaf:gender>male</foaf:gender>
    <openp:politicalGroup rdf:resource="http://openparlamento.it/groups/Democratic_Party"/>
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/Mario_Rossi"/>
  </rdf:Description>
</rdf:RDF>
where names, descriptions, political affiliation and more are provided. Moreover, a property called openp:similarity could be used to map closer congressmen, using the same information as the already cited chart.
Secondly, all the information about congressmen is published on the official Italian chambers' web site. By wrapping this data, OpenParlamento.it could provide an extremely exhaustive set of official information and, more importantly, links to DBpedia would be the key to getting a full set of machine-processable data from other Linked Data clouds as well.
How to benefit from all of this? Apart from employing a cutting-edge technology to syndicate data, everyone who wants to link the data provided by OpenParlamento.it on their web pages can easily do it using RDFa.
With these technologies as a basis, a new breed of applications (like web crawlers, for those interested in SEO) will access and process these data in a new, fashionable and extremely powerful way.
It is time for those guys to embrace the Semantic Web, isn't it?
Posted at 14:10
For a presentation at work where it's tricky to add video but an image is OK, gifski worked brilliantly for converting a video to a GIF. Even with the defaults it was fine. I needed to tweak it a bit as I wanted the output a bit smaller; -W worked great for that, but there are a bunch of other ways too.
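For the record, the commands were along these lines – filenames and width are just examples, and gifski wants PNG frames, which ffmpeg can extract first:

$ ffmpeg -i partridge.mov frame%04d.png
$ gifski -W 640 -o partridge.gif frame*.png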
Here’s one of the Montpelier partridge from January last year.
Posted at 14:09
Tarim and I have been trying to get a LoRaWAN network up and running in Bristol using some of the old Bristol Wireless antenna locations. First step for me was in January when we got together and tried to get a Raspberry Pi Gateway working, with so much #fayle – a subtly broken Pi, a dodgy PSU connector, and I did not know that the Raspberry Pi imager process had changed for Bullseye (you have to set a user in settings, and enable ssh there – you can also put the wifi details in, so it’s handy if you know about it).
Aaanyway, for #mayke (now on Mastodon) I've been trying for a couple of days to get a TTGO LoRa32 OLED v1.3(?) I bought ages ago to work with the Pi gateway. In summary: argh. There are so many partial examples around, different naming conventions, and allsorts. But here are some notes on what works.
On the Raspberry Pi: a 3B+ and an IC880A board that Tarim had – then install Bullseye (with ssh access and wifi and a pi user) and then install using The Things Network (TTN)'s example gateway instructions. All fine. My only daftness here was finding this command: /opt/ttn-station/bin/station -p and assuming (why?) that I was tailing the logs instead of running another instance on top of the systemctl one. Which led to all sorts of weird errors, including ones related to not resetting the device, e.g.

[lgw_receive:1143] CONCENTRATOR IS NOT RUNNING, START IT BEFORE RECEIVING
…
[HAL:INFO] [lgw_spi_close:159] Note: SPI port closed
…
[lgw_start:764] Failed to setup sx125x radio for RF chain 0

etc.

D'oh.
The TTGO was more tricky. There seem to be multiple libraries at multiple levels of abstraction, and I wanted one that was Arduino-IDE compatible. It's really hard to find out what pin mapping you need for these slightly obscure (and superseded) TTGO boards. Then there's the difference between LoRaWAN Specification 1.0.3 and LoRaWAN Specification 1.1. After a while I realised that the MCCI_LoRaWAN_LMIC_library (0.9.2) I was using in the code I had found on the internet was made for 1.0.3 – and then configuring a TTN device was muuch easier with fewer baffling options.
One final self-own from my frenetic searching of forums looking for a bit of code with the right pin mapping for the TTGO: I somehow found some old code (I think it was this – don't use it, it's 5 years old! – which I think is based on an old version of this, but adapted for the TTGO) which didn't recognise all the event types from TTN. Updated below, basically adding this in setup()

LMIC_setAdrMode(1);
LMIC_setLinkCheckMode(1);

and LMIC_setLinkCheckMode(1) again in case EV_JOINED.
Thank you TTN forum users, and again.
A couple more things – though there are probably more I’ve forgotten.
Edit ./project_config/lmic_project_config.h in MCCI_LoRaWAN_LMIC_library on your machine to pick the right region (on a mac, mine was in /Users/[me]/Documents/Arduino/libraries/MCCI_LoRaWAN_LMIC_library/project_config/lmic_project_config.h).
LSB is little-endian, MSB is big-endian, and <> switches between chars with the preceding 0x business and without. DEVEUI and APPEUI are little-endian and APPKEY is big-endian.
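The byte-reversal bit is easy to get wrong by hand; a little throwaway script does the LSB conversion for you (the EUI below is a made-up example, not a real device's):

```python
# Convert an EUI copied from the TTN console (MSB hex string) into the
# little-endian (LSB-first) byte list the LMIC sketch expects for DEVEUI/APPEUI.
def eui_to_lsb(msb_hex: str) -> str:
    data = bytes.fromhex(msb_hex)  # bytes in MSB order, as shown in the console
    return ", ".join(f"0x{b:02X}" for b in reversed(data))

print(eui_to_lsb("70B3D57ED0000001"))  # made-up example EUI
# -> 0x01, 0x00, 0x00, 0xD0, 0x7E, 0xD5, 0xB3, 0x70
```

APPKEY stays in MSB order, so it can be pasted in as-is.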
I somewhat enjoyed the detective work and even read some of TFM. So a happy #mayke for me.
The final code I used:
// MIT License
// https://github.com/gonzalocasas/arduino-uno-dragino-lorawan/blob/master/LICENSE
// Based on examples from https://github.com/matthijskooijman/arduino-lmic
// Copyright (c) 2015 Thomas Telkamp and Matthijs Kooijman
#include <Arduino.h>
#include "lmic.h"
#include <hal/hal.h>
#include <SPI.h>
#define LEDPIN 2
unsigned int counter = 0;
char TTN_response[30];
// This EUI must be in little-endian format, so least-significant-byte
// first. When copying an EUI from ttnctl output, this means to reverse
// the bytes.
// Copy the value from Device EUI from the TTN console in LSB mode.
static const u1_t PROGMEM DEVEUI[8]= { 0x.., 0x.., .. };
void os_getDevEui (u1_t* buf) { memcpy_P(buf, DEVEUI, 8);}
// Copy the value from Application EUI from the TTN console in LSB mode
static const u1_t PROGMEM APPEUI[8]= { 0x.., 0x.., .. };
void os_getArtEui (u1_t* buf) { memcpy_P(buf, APPEUI, 8);}
// This key should be in big endian format (or, since it is not really a
// number but a block of memory, endianness does not really apply). In
// practice, a key taken from ttnctl can be copied as-is. Anyway its in MSB mode.
static const u1_t PROGMEM APPKEY[16] = { 0x.., .. };
void os_getDevKey (u1_t* buf) { memcpy_P(buf, APPKEY, 16);}
static osjob_t sendjob;
// Schedule TX every this many seconds (might become longer due to duty
// cycle limitations).
const unsigned TX_INTERVAL = 120;
// Pin mapping
const lmic_pinmap lmic_pins = {
.nss = 18,
.rxtx = LMIC_UNUSED_PIN,
.rst = 14,
.dio = {26, 33, 32} // Pins for the Heltec ESP32 Lora board/ TTGO Lora32 with 3D metal antenna
};
void do_send(osjob_t* j){
// Payload to send (uplink)
static uint8_t message[] = "Hello OTAA!";
// Check if there is not a current TX/RX job running
if (LMIC.opmode & OP_TXRXPEND) {
Serial.println(F("OP_TXRXPEND, not sending"));
} else {
// Prepare upstream data transmission at the next possible time.
LMIC_setTxData2(1, message, sizeof(message)-1, 0);
Serial.println(F("Sending uplink packet..."));
digitalWrite(LEDPIN, HIGH);
}
// Next TX is scheduled after TX_COMPLETE event.
}
void onEvent (ev_t ev) {
Serial.print(os_getTime());
Serial.print(": ");
Serial.print(ev);
Serial.print(": ");
switch(ev) {
case EV_SCAN_TIMEOUT:
Serial.println(F("EV_SCAN_TIMEOUT"));
break;
case EV_BEACON_FOUND:
Serial.println(F("EV_BEACON_FOUND"));
break;
case EV_BEACON_MISSED:
Serial.println(F("EV_BEACON_MISSED"));
break;
case EV_BEACON_TRACKED:
Serial.println(F("EV_BEACON_TRACKED"));
break;
case EV_JOIN_FAILED:
Serial.println(F("EV_JOIN_FAILED"));
break;
case EV_REJOIN_FAILED:
Serial.println(F("EV_REJOIN_FAILED"));
break;
case EV_LOST_TSYNC:
Serial.println(F("EV_LOST_TSYNC"));
break;
case EV_RESET:
Serial.println(F("EV_RESET"));
break;
case EV_RXCOMPLETE:
// data received in ping slot
Serial.println(F("EV_RXCOMPLETE"));
break;
case EV_LINK_DEAD:
Serial.println(F("EV_LINK_DEAD"));
break;
case EV_LINK_ALIVE:
Serial.println(F("EV_LINK_ALIVE"));
break;
case EV_SCAN_FOUND:
Serial.println(F("EV_SCAN_FOUND"));
break;
case EV_TXSTART:
Serial.println(F("EV_TXSTART"));
break;
case EV_TXCANCELED:
Serial.println(F("EV_TXCANCELED"));
break;
case EV_RXSTART:
// do not print anything -- it wrecks timing
break;
case EV_TXCOMPLETE:
Serial.println(F("EV_TXCOMPLETE (includes waiting for RX windows)"));
if (LMIC.txrxFlags & TXRX_ACK) {
Serial.println(F("Received ack"));
}
if (LMIC.dataLen) {
int i = 0;
Serial.print(F("Data Received: "));
Serial.write(LMIC.frame+LMIC.dataBeg, LMIC.dataLen);
Serial.println();
Serial.println(LMIC.rssi);
// Copy the downlink payload, clamped so it can't overflow TTN_response
for ( i = 0 ; i < LMIC.dataLen && i < (int)sizeof(TTN_response)-1 ; i++ )
TTN_response[i] = LMIC.frame[LMIC.dataBeg+i];
TTN_response[i] = 0;
}
// Schedule next transmission (once – the original had this duplicated)
os_setTimedCallback(&sendjob, os_getTime()+sec2osticks(TX_INTERVAL), do_send);
digitalWrite(LEDPIN, LOW);
break;
case EV_JOINING:
Serial.println(F("EV_JOINING: -> Joining..."));
break;
case EV_JOINED: {
Serial.println(F("EV_JOINED"));
LMIC_setLinkCheckMode(1);
}
break;
default:
Serial.println(F("Unknown event"));
Serial.print(ev);
Serial.print("\n");
break;
}
}
void setup() {
Serial.begin(115200);
delay(2500); // Give time to the serial monitor to pick up
Serial.println(F("Starting..."));
// Use the Blue pin to signal transmission.
pinMode(LEDPIN,OUTPUT);
// LMIC init
os_init();
// Reset the MAC state. Session and pending data transfers will be discarded.
LMIC_reset();
LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);
// Set up the channels used by the Things Network, which corresponds
// to the defaults of most gateways. Without this, only three base
// channels from the LoRaWAN specification are used, which certainly
// works, so it is good for debugging, but can overload those
// frequencies, so be sure to configure the full frequency range of
// your network here (unless your network autoconfigures them).
// Setting up channels should happen after LMIC_setSession, as that
// configures the minimal channel set.
LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7), BAND_CENTI); // g-band
LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF11, DR_SF7B), BAND_CENTI); // g-band
LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF10, DR_SF7), BAND_CENTI); // g-band
LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF9, DR_SF7), BAND_CENTI); // g-band
LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF8, DR_SF7), BAND_CENTI); // g-band
LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF7, DR_SF7), BAND_CENTI); // g-band
LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF7, DR_SF7), BAND_CENTI); // g-band
// TTN defines an additional channel at 869.525MHz using SF9 for class B
// devices' ping slots. LMIC does not have an easy way to define this
// frequency, and support for class B is spotty and untested, so this
// frequency is not configured here.
// Link check validation is enabled below (the stock example disabled it)
//LMIC_setLinkCheckMode(0);
LMIC_setAdrMode(1);
LMIC_setLinkCheckMode(1);
//LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);
// TTN uses SF9 for its RX2 window.
LMIC.dn2Dr = DR_SF9;
// Set data rate and transmit power for uplink (note: txpow seems to be ignored by the library)
//LMIC_setDrTxpow(DR_SF11,14);
LMIC_setDrTxpow(DR_SF9,14);
// Start job
do_send(&sendjob); // Will fire up also the join
}
void loop() {
os_runloop_once();
}
Posted at 14:09
I keep seeing these two odd time effects in my life and wondering if they are connected.
The first is that my work-life has become extremely intense in bursts – and I don’t mean long hours, I mean intense brainwork for maybe a week, which wipes me out – and then the next week is inevitably slower and less intense. Basically everything gets bunched up together. I feel like this has something to do with everyone working from home, but I’m not really sure how to explain it (though it reminds me of my time at Joost where we’d have an intense series of meetings with everyone together every few months, because we were distributed. But this type isn’t organised, it just happens). My partner pointed out that this might simply be poor planning on my part (thanks! I’m quite good at planning actually).
The second is something we’ve noticed at the Cube – people are not committing to doing stuff (coming to an event, volunteering etc) until very close to the event. Something like 20-30% of our tickets for gigs are being sold the day before or on the day. I don’t think it’s people waiting for something better. I wonder if it’s Covid-related uncertainty? (also 10-15% don’t turn up, not sure if that’s relevant).
Anyone else seeing this type of thing?
Posted at 14:09
More for my reference than anything else. I’ve been trying to get the toolchain set up to use a Sparkfun Edge. I had the Edge, the Beefy3 FTDI breakout, and a working USB cable.
This worked great for the speech example, for me (although the actual tensorflow part never understands my “yes” “no” etc, but anyway, I was able to successfully upload it)
$ git clone --depth 1 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ gmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge micro_speech_bin
$ cp tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/keys_info0.py tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/keys_info.py
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/create_cust_image_blob.py --bin tensorflow/lite/micro/tools/make/gen/sparkfun_edge_cortex-m4_micro/bin/micro_speech.bin --load-address 0xC000 --magic-num 0xCB -o main_nonsecure_ota --version 0x0
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/create_cust_wireupdate_blob.py --load-address 0x20000 --bin main_nonsecure_ota.bin -i 6 -o main_nonsecure_wire --options 0x1
$ export BAUD_RATE=921600
$ export DEVICENAME=/dev/cu.usbserial-DN06A1HD
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/uart_wired_update.py -b ${BAUD_RATE} ${DEVICENAME} -r 1 -f main_nonsecure_wire.bin -i 6
But then I couldn’t figure out how to generalise it to use other examples – I wanted to use the camera because ages ago I bought a load of tiny cameras to use with the Edge.
So I tried this guide, but couldn’t figure out where the installer had put the compiler. Seems basic but….??
So in the end I used the first instructions to download the tools, and then the second to actually do the compilation and installation on the board.
$ find . | grep lis2dh12_accelerometer_uart
# you might need this -
# mv tools/apollo3_scripts/keys_info0.py tools/apollo3_scripts/keys_info.py
$ cd ./tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/boards_sfe/edge/examples/lis2dh12_accelerometer_uart/gcc/
$ export PATH="/Users/libbym/personal/mayke2021/tensorflow/tensorflow/lite/micro/tools/make/downloads/gcc_embedded/bin/:$PATH"
$ make clean
$ make COM_PORT=/dev/cu.usbserial-DN06A1HD bootload_asb ASB_UPLOAD_BAUD=921600
etc. Your COM port will be different, find it using
ls /dev/cu*
If like me the FTDI serial port KEEPS VANISHING ARGH – this may help (I’d installed 3rd party FTDI drivers ages ago and they were conflicting with Apple’s ones. Maybe. Or the reboot fixed it. No idea).
Then you have to use a serial programme to get the image. I used the Arduino serial monitor since it was there, then copied and pasted the output into a text file, at which point you can use
tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/boards_sfe/common/examples/hm01b0_camera_uart/utils/raw2bmp.py
to convert it to a png. Palavers.
Posted at 14:09
I got one of these lovely M5StickCs for a present, and had a play with it as part of Makevember. I wanted to make a “push puppet” (one of those toys that you push upwards and they collapse) that reacted to Slack commands. Not for any reason really, though I like the idea of tiny colleagues that stand up when addressed on slack. Makevember doesn’t need a reason. Or at any rate, it doesn’t need a good reason.
Here are some notes about https and websockets on the ESP32 pico which is the underlying board for the M5StickC.
I made a “slack wobbler” a couple of years ago, also in makevember – an ESP8266 that connected to slack, then wobbled when someone was mentioned, using a servo. Since then I ran into some https problems, obviously also encountered by Jeremy21212121 who fixed it using a modified version of a websockets server. This works for the ESP8266 – turns out you can also get the same result using httpsClient.setInsecure() with BearSSL. I’ve put an example of that here.
For ESP32 it seems a bit different. As far as I can tell you need the certificate, not the fingerprint, in this case. You can get it using openssl s_client -connect api.slack.com:443
For ESP32 you also need to use the correct libraries for wifi and wifimulti. The websocket client library is this one.
And a final note – the M5StickC is very cool but doesn’t enable you to use many of its GPIO ports. The only one I can find that allows you to use a servo directly is on the Grove connector, which I bodged some female jumper wires into, though you can get a grove to servo converter (there are various M5Stick hats you can use for multiple servos). Here’s some code. And a video.
Posted at 14:09
Makevember and lockdown have encouraged me to make an improved version of libbybot, which is a physical version of a person for remote participation. I’m trying to think of a better name – she’s not all about representing me, obviously, but anyone who can’t be somewhere but wants to participate. [update Jan 15: she’s now called “sock_puppet”].
This one is much, much simpler to make, thanks to the addition of a pan-tilt hat and a simpler body. It’s also more expressive thanks to these lovely little 5*5 led matrixes.
Her main feature is that – using a laptop or phone – you can see, hear and speak to people in a different physical place to you. I used to use a version of this at work to be in meetings when I was the only remote participant. That’s not much use now of course. But perhaps in the future it might make sense for some people to be remote and some present.
New recent features:
* ish
**a sock
I’m still writing docs, but the repo is here.
Posted at 14:09
A couple of people have asked me about my presence-robot-in-a-lamp, libbybot – unsurprising at the moment maybe – so I’ve updated the code in github to use the most recent RTCMultiConnection (webRTC) library and done a general tidy up.
I gave a presentation at EMFCamp about it a couple of years ago – here are the slides:
Posted at 14:09