SNA, Wikipedia, and the Hellenistic World

Part of my work on the Big Ancient Mediterranean project involves creating a general software framework that can display social networks produced with Gephi, either as “stand alone” displays or integrated with geographic and textual information.

I created this particular module, “Hellenistic” Royal Relationships, to highlight the “stand alone” social network analysis (SNA) capabilities of BAM, and to serve as the start of a more generalized Hellenistic prosopography. Some other, more specialized work has been done in this direction; notably Trismegistos Networks and the efforts of SNAP:DRGN to create data standards for describing prosopographies and linking to other projects. Eventually this module will take advantage of these efforts, and provide stable URIs for its own data.

I envision this module serving several purposes. First, it provides an interesting visual representation of data contained within Wikipedia articles, including textual data that is not “linked” to other entries and is therefore not discoverable by automated means. It serves as a quick reference for familial relationships, and provides an entry point for further exploration and study. This project has created a “core” of relationships that can be further expanded by different projects. It can also function as a check on Wikipedia data; some of the relationships here are highly controversial, or could even be wrong.

For future development, the next steps are to add more data on the subjects, including birth / death / reigning dates and a time-line browser based on those dates. As mentioned above, more work needs to be done to take advantage of linked data projects, including linkages to Pleiades locations where appropriate, linkages to Nomisma IDs if the monarch minted coins, and the presentation of the underlying data in a format that is compatible with SNAP:DRGN. Finally, I would like to develop a method for the automatic discovery and extraction of relationships described in Wikipedia articles, which is an interesting, but difficult, problem.

Asia Minor in the Second Century CE: A New Wall Map From the Ancient World Mapping Center

The Ancient World Mapping Center at UNC Chapel Hill has just released a 1:750,000 scale map of Roman Asia Minor in the Second Century CE under CC BY 4.0. Several years in the making, this map is a collaboration between several different directors of the center (including myself), domain experts, and other historians, and it represents the current state of knowledge about Roman Asia Minor in this period.

Intended for class or research use, the map can be printed, distributed digitally, or remixed as desired. It is the same scale and general size as the AWMC’s other wall map offerings through Routledge, so if you are so inclined, you can add it to a “mega-map” of the Mediterranean World. Demand for the map was so high that Dropbox suspended our public folder; you can e-mail the AWMC (awmc@unc.edu) for a new direct download link.

Although this project is a static map of Asia Minor, the data behind the map can be found at the AWMC GitHub page. In a future post, I’ll write up how to use the AWMC geodata and the BAM framework to make an interactive version of this map which you can modify for your own needs.

#ReportHate, whywereafraid, and SNA

With increasing incidents of election-related violence being reported on Twitter and other social media, I decided to perform a quick network analysis of #ReportHate and whywereafraid (which, as of this writing, has removed its Twitter link from its site). I am interested in examining the development of these online communities, whether there are significant overlaps between them, and whether there are opportunities for increased cooperation.

The main component of the #ReportHate network. Dr. Singh’s community is in purple, the SPLC is in green, and the alt-right grouping is in red.

First, I looked at each network in isolation. I started with the network formed around #ReportHate, which consists of 2,781 nodes, 4,217 edges and 79 components. (A quick network primer: nodes are users or hashtags, while edges represent users mentioning a hashtag or another user. Components are parts of the graph where every node can trace a path through a number of edges to another node, and degree is the number of edges connecting a node to other nodes).
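For readers who want to experiment with these measurements, here is a minimal sketch of how components and degree can be computed with Python and the networkx library. The file name and column names are hypothetical stand-ins, not the actual data behind this post:

import csv
import networkx as nx

# Build an undirected graph from a hypothetical edge list; the file and
# its "source" / "target" columns are placeholders for illustration.
G = nx.Graph()
with open("reporthate_edges.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["source"], row["target"])

# Components: groups of nodes that can all reach each other
components = list(nx.connected_components(G))
print(len(G), "nodes,", G.number_of_edges(), "edges,", len(components), "components")

# Degree: the number of edges attached to a node; print the top ten
top = sorted(dict(G.degree()).items(), key=lambda pair: pair[1], reverse=True)[:10]
for name, degree in top:
    print(name, degree)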

Surprisingly to me, the SPLC (@SPLCENTER) is not the node with the highest degree; that honor belongs to Dr. Simran Jeet Singh (@SIKHPROF), a professor of religion at Trinity University, despite the SPLC’s approximately 9-to-1 advantage in followers (96.3 thousand to 10.7 thousand). It will be interesting to see if this disparity closes as more individuals become aware of the hashtag.

The top ten nodes by degree are dominated by two very different philosophies. @SIKHPROF, @SPLCENTER, @SHAUNKING, @AMYWESTERVELT, @TRUMPSWORLD2016, and @THIERISTAN are certainly aligned with progressive causes and appear to be supporters of the SPLC’s efforts to accurately report hate crimes. However, the next major node on the graph, @STOPHATECRIMEZ, appears to be an alt-right account (including an emoticon frog as a stand-in for Pepe the Frog), which tweets links to accounts of violence against Trump voters (dominated by links to YouTube) and refutations of violence committed by Trump supporters. The accounts that retweeted this account likewise seem to be dominated by alt-right and far right wing individuals, and the hashtag #HATECRIME is almost exclusively used by this group.

Moving on from the alt-right component of the graph, it is apparent that there are several large clusters of SPLC supporters that as of yet do not have much interconnectivity. As this is a relatively new hashtag, I expect a growth of connections between clusters; if not, there is an opportunity for the “central” nodes of each cluster to reach out to each other and establish a more robust online community. Another potential issue is the set of nodes that are otherwise disconnected from the network; if these individuals are tweeting about incidents, it would be beneficial to reach out (virtually) and bring them into the larger #ReportHate network.

Unlike the #ReportHate network, with its strong central component, the whywereafraid network is far more dispersed and much smaller. There are 992 nodes and 938 edges, with 151 components. The node with the highest degree count is Patrick Kingsley (@PATRICKKINGSLEY), a foreign correspondent for the Guardian; his high degree is the result of his tweet linking to the whywereafraid tumblr account.

The whywereafraid network

The other two of the top three nodes, @ADAMPOWERS and @JAMIETWORKOWSKI, seem to be allied with the progressive movement. The node with the next-highest degree is the official account of Donald Trump (@REALDONALDTRUMP); however, this is due to other Twitter users castigating him over election violence.

I then placed the networks together, to see if there was any overlap between the two growing communities. There are 26 users and 19 hashtags in common; when the entire network is placed in a graph, the node with the highest degree of the 26 is @SHAUNKING, who is mentioned four times by other users to bring his attention to whywereafraid. There are other tentative connections, but for the most part the two networks are very distinct, with little cross-conversation.

The combined network. Edges that are from the #ReportHate data are in red, edges from the whywereafraid data are in blue.

This represents both a danger and an opportunity for the supporters of #ReportHate and whywereafraid. As the two networks grow, they are likely to develop more links due to shared common interests, but there is a real possibility that many users will remain tied to their initial choice of hashtag and not participate in the wider community or conversation. If nodes that are structurally important (those with high betweenness centrality) in the #ReportHate graph, such as @SIKHPROF and @AMYWESTERVELT, could be brought into conversation with the major nodes of the whywereafraid graph, then there is a good chance of merging the two networks, increasing awareness, mutual support, and online presence.

#NoDAPL Twitter Analysis

Introduction:

Map by Carl Sack

The approximately 1,172 mile Dakota Access Pipeline1 has been highly controversial since its public unveiling in 2014.2 The Standing Rock Sioux and allied organizations took ultimately unsuccessful legal action to stop construction of the project,3 while youth from the reservation began a social media campaign which gradually morphed into a larger movement with dozens of associated hashtags.4 I performed network analysis on #NODAPL, the most prominent of these hashtags on Twitter, between October 22 and 30, 2016. This revealed some interesting trends in the data, including the key role of alternative media, celebrities, and seemingly random Twitter users in holding the network together. Another surprising finding was the relatively minor role that Republican candidate Donald Trump’s Twitter account plays in the #NODAPL conversation, especially compared to the accounts of Barack Obama, Hillary Clinton, Bernie Sanders, and Dr. Jill Stein.

My Visualization of the #NODAPL network

Preliminary Network Analysis:

Due to restrictions from the Twitter API and crashes / limitations in the software (see below), I do not have complete access to all Tweet traffic involving #NODAPL.5 I used the Twitter Archiving Google Sheet (TAGS) 6.16 to capture tweets that featured #NODAPL somewhere in the tweet text. The resulting sheets were then imported into a database, then exported into an edges table for use in Gephi. For technical details, see the “Detailed Procedure” section below.

Basic to any network analysis is the concept of nodes and edges. Nodes can represent people, places, things, ideas, etc. – they are entities on the graph. In this case, nodes are Twitter users and hashtags. Edges associate nodes in some manner; they can represent friendship, biological relationships, enmity, or anything else that links two nodes. For my analysis, an edge is created any time a user includes a user name or hashtag in a tweet. For example, one of the most prominent users in this study, @RUTHHHOPKINS, is represented as a node, with an edge created to the node #NODAPL every time she uses the hashtag in a tweet.
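As a rough illustration of that edge-building step (this is a sketch of the general idea, not the exact code behind this analysis), mentions and hashtags can be pulled from tweet text with a pair of regular expressions:

import re

MENTION = re.compile(r"@\w+")
HASHTAG = re.compile(r"#\w+")

def edges_from_tweet(user, text):
    # One edge per mention or hashtag; Twitter is case insensitive,
    # so everything is uppercased, matching the convention in this post.
    source = "@" + user.upper()
    targets = MENTION.findall(text) + HASHTAG.findall(text)
    return [(source, target.upper()) for target in targets]

# An invented example tweet, for illustration only:
print(edges_from_tweet("Ruthhhopkins", "Water is life. #NoDAPL #StandingRock"))
# [('@RUTHHHOPKINS', '#NODAPL'), ('@RUTHHHOPKINS', '#STANDINGROCK')]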

#NODAPL itself was excluded as a node in this analysis, as every tweet and user would be directly connected to it. This network features 133,702 nodes linked by 630,393 edges.7 I used Gephi to identify communities of nodes that are strongly linked together, which are represented by different colors in the network visualization.8 In addition, I ran some basic network statistics, including measuring the degree of nodes (the number of edges connecting a node to other users or hashtags) on the graph. In these measurements, out-degree indicates that a node initiates a link to another node in the graph, which in this case means another user name or hashtag was mentioned in a tweet by the node in question. In-degree measures incoming edges, which indicates that a particular node is the subject of a Twitter conversation.
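In code, this simply means treating the network as a directed graph; a small sketch with invented data:

import networkx as nx

# Toy directed graph: an edge A -> B means A mentioned B in a tweet.
G = nx.DiGraph()
G.add_edges_from([
    ("@USER_A", "#STANDINGROCK"),
    ("@USER_B", "#STANDINGROCK"),
    ("@USER_B", "@POTUS"),
])

print(G.in_degree("#STANDINGROCK"))  # 2 -- mentioned by both users
print(G.out_degree("@USER_B"))       # 2 -- an active tweeter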

I first looked at the in-degree measurement. #STANDINGROCK was by far the node with the highest in-degree, indicating its popularity as a potential alternative hashtag to #NODAPL. @POTUS, the official Twitter account of the President of the United States, was in second place, followed by #WATERISLIFE, @HILLARYCLINTON, @OFFICIALJADEN, @UR_NINJA, @SHAILENEWOODLEY, @MARKRUFFALO, and @RUTHHHOPKINS. In this list, only two nodes are not politicians, hashtags, or celebrities. @UR_NINJA is the official Twitter account of Unicorn Riot,9 a 501(c)3 nonprofit organization based in Minneapolis, Minnesota,10 which has done extensive reporting on the Dakota Access Pipeline protests. @RUTHHHOPKINS is the Twitter account of Ruth Hopkins, a Dakota/Lakota Sioux writer, journalist, and blogger. The high degree count on these nodes indicates that they may function as an information service, where their reporting on the situation is retweeted and mentioned by many other nodes in the network.

This measurement also revealed a marked difference between the in-degree and out-degree of nodes. The top 34 nodes by degree are so dominated by in-degree connections that no node has an out-degree that accounts for more than 3.17% of its total edges. This reveals that such nodes are being “talked at”: they are mentioned in tweets and retweeted in large numbers, but by and large they engage very little in return with other Twitter users.

A particular user group is indicative of this trend. Few politicians have used Twitter to actively engage with activists or to contribute to the dialogue surrounding the #NoDAPL movement. In some cases this is not surprising; the official Twitter account of the President of the United States can scarcely be expected to contribute extensively to dialogue on Twitter. Despite being the seventh-highest degree node, and despite an occupation of her Brooklyn campaign headquarters on October 27, 2016,11 @HILLARYCLINTON, the official account of Hillary Clinton, has likewise not responded to #NoDAPL conversations on Twitter. The official account of Bernie Sanders, @SENSANDERS, has also not extensively engaged with #NODAPL. However, on October 31, 2016, which is outside of the bounds of my data set, his account did issue a series of tweets in support of the #NODAPL movement.12

Another politician’s account, that of Dr. Jill Stein (@DRJILLSTEIN), is twelfth in in-degree, but has only five outwardly directed edges. Despite active involvement at the protests leading to charges of criminal trespass and criminal mischief,13 Dr. Stein’s Twitter account has barely engaged with other users, with the only mentions in this data set originating from a retweet that mentioned Hillary Clinton and Barack Obama.14 Interestingly, despite over 1,000 retweets (many of which were collected by this study), her tweet mentioning both Hillary Clinton and Donald Trump15 was not captured by the TAGS software.

Perhaps surprisingly for a major party candidate, the Twitter handle of Donald Trump, @REALDONALDTRUMP, is an outlier on this list: he ranks at 112,160 with only 933 total mentions. Trump’s publicized investments and connections with the Dakota Access project16 and his environmental positions, including discounting climate change,17 almost certainly make him unlikely to be sympathetic to, let alone an ally of, the #NODAPL movement. Indeed, most of his mentions on the network are simply retweets of Dr. Jill Stein’s criticism of both Donald Trump and Hillary Clinton for their lack of involvement in the pipeline issue.18

Drilling down further into the data, I next looked at the nodes with the highest out-degree, which represents nodes that mentioned other users and hashtags. There were some interesting variations from the trends among in-degree nodes. Three users, @DEANLEH, @CANATIVEOBT, and @WMN4SRVL, had in-degree and out-degree measurements that were no more than 20% divergent from each other. However, this does not mean that these nodes are engaged in extensive online conversations. These accounts all feature extensive retweets of and linkages to different causes often associated with the progressive movement, including climate change awareness, opposition to institutional racism, feminism, and anti-corporatism. All three of these accounts seem to perform a function similar to news aggregation, as the majority of their mentions are retweets from other sources and are not extensive discussions with other users.

Another useful statistic, betweenness, measures the number of shortest paths (connections between any two nodes on the graph that may involve any number of additional nodes) that pass through a specific node.19 Nodes with a high betweenness are “central” in that they play a critical role in connecting (and therefore moving information through) the network. The single node with the highest betweenness is @UR_NINJA, which, combined with its high degree ranking, suggests that the news service plays a critical role in bringing together individuals on the graph who are interested in social justice / progressive issues. Four other nodes in the top 25 by betweenness are likewise in the top 25 nodes by degree.
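A toy example makes the intuition behind this statistic clear; in the sketch below (again networkx, not my actual Gephi workflow), a single node joins the two halves of a graph and so scores highest:

import networkx as nx

# @BRIDGE is the only route between the two ends of this path, so every
# shortest path between the halves passes through it.
G = nx.Graph([("@A", "@B"), ("@B", "@BRIDGE"), ("@BRIDGE", "@C"), ("@C", "@D")])

bc = nx.betweenness_centrality(G)
for name, score in sorted(bc.items(), key=lambda pair: pair[1], reverse=True):
    print(name, round(score, 3))
# @BRIDGE scores highest, followed by its immediate neighbors.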

The remaining nodes are somewhat surprising. The Twitter profile of the second-highest betweenness node, @TNPMR, has a limited online footprint outside of Twitter, and does not seem to be involved in a leadership capacity in a social movement or media organization. Another important node in this measurement, @AMAZONMILLER, ranks only 1,638th in total degree, yet still retains an important place in the network structure. Looking further at this data, I next examined the Twitter profile of each user who scored in the top 25 for betweenness. I divided this list into people who seem to be primarily interested in progressive causes in general vs. those who expressed affinity for indigenous rights issues. The results were nearly evenly split, with a slight edge to the more general progressivists. However, only two of the top ten nodes in the betweenness category focused primarily on indigenous issues, while the rest were concerned with progressivist issues more broadly. What this may indicate is that, as a whole, indigenous activists may face future difficulties in promoting their narrative outside of the more general progressive interests of the online community.

Further Observations:

These preliminary steps have also revealed some issues about data collection and curation. Twitter’s REST and streaming APIs are woefully inadequate for examining the whole data set. While Twitter provides, in theory, a representative sample of the data set, one of the powers of social network analysis is the discovery of weak ties and other network structures which are by definition not representative of the network as a whole. This can be frustrating for academic study of the network, and extremely detrimental to movements that depend on social media to transmit their messages. Groups can look at their own twitter histories, but the larger network structure, along with crucial weak ties, may be invisible to them.

Although Twitter does provide mechanisms for obtaining the entire history of a hashtag’s usage, doing so for hashtags that develop organically and are not heavily watched from the beginning is almost certainly a cost-prohibitive proposition for social movements that are loosely organized, under-funded, and / or have limited computer infrastructure. It would be a significant benefit for such groups to gain access to the Twitter history of their movements, and to be able to trace the evolution of the conversation on social media. As hashtag use can grow organically, with many different signifiers used for conversations, Twitter’s current pricing structure and data access model puts these groups at a severe disadvantage and hinders the identification and cultivation of allied communities and supporters.

A less pressing, but nevertheless important, issue is access to Twitter’s archive by researchers. Unlike print material or traditional media, which may be tedious to analyze but are fully (and for the most part cheaply) accessible to interested parties, the complete set of tweets on a topic is impossible to study without significant funding. Even if a researcher could guess all of the hashtags that could emerge from a dynamic topic, the Twitter streaming API does not provide all relevant tweets. Such limitations make it challenging to use Twitter data in a pedagogical setting. Some of my students have expressed interest in conducting similar projects, but the need for constant downstream connections and the high cost of historical tweets have made all but the most superficial studies impossible. There needs to be a more cost-effective means of accessing Twitter’s data for projects operating on a limited budget, students, and other academic users.

Next Steps:

In addition to the data set on #NoDAPL featured here, I have also compiled a number of hashtags and data in separate TAGS sheets which can be combined to see more of the network. I am currently running a Python script to grab more tweet data from the streaming API. After placing this data in the network and performing some basic sentiment analysis, I want to see if distinct communities have formed around different hashtags, and if those communities have noticeably different rhetorical strategies that correspond to the inclusion of certain hashtags. A long-term goal is to secure funding to obtain the complete Twitter archive of #NODAPL and related hashtags in order to perform a full social network and sentiment analysis. In addition, I would like to examine the Twitter history of @UR_NINJA and other alternative news organizations to see if their followers form recognizable activist communities. As part of this analysis, I am especially interested to see how these communities change when a news organization shifts its focus between causes (like #FERGUSON to #NODAPL), and to examine the interactions of these virtual communities with different social movements.

To overcome the issues I discovered with TAGS and TwitterStreamingImporter, I am currently running a Python script (modeled after http://adilmoujahid.com/posts/2014/07/twitter-analytics/) that pulls in the full json object from Twitter’s streaming API for a number of hashtags related to #NODAPL. I think the best approach is to perform a weekly update of a “master” network that captures all of the data I can dealing with #NODAPL, and then to run statistics from a filtered network in Gephi. I will be sure to post any additional developments here.
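For anyone who wants to replicate this setup, the core of such a script looks roughly like the sketch below. I use the tweepy library here as one common option; the credentials are placeholders, and the tracked hashtag list is only an illustration of my working set:

import tweepy

# Placeholder credentials from a Twitter app registration
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class HashtagListener(tweepy.StreamListener):
    def on_data(self, data):
        # Append the full json object for each incoming tweet to a file
        with open("nodapl_stream.json", "a") as out:
            out.write(data)
        return True

    def on_error(self, status):
        print(status)
        return True  # keep the connection alive

stream = tweepy.Stream(auth, HashtagListener())
stream.filter(track=["#NoDAPL", "#StandingRock", "#WaterIsLife"])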

Detailed Procedure:

The first difficulty in analyzing Twitter traffic is actually obtaining Twitter data. While Twitter does retain a historical archive of all tweets, this resource is currently inaccessible for academic research unless licensing fees are paid to an archival service such as GNIP. There is an indication that GNIP is aware of the power of Twitter analytics for academic research, and there are different pricing plans available,20 but as my project is currently in the exploratory phase, I am operating without any funding. As such, I needed an alternative.

I first used TAGS to pull historical and incoming tweets into separate Google sheets for each hashtag I was interested in. TAGS uses Twitter’s REST API, which limits search rates and results.21 I ran into rate limits rather quickly with my searches; in addition, my documents in Google hit their size and row limits. TAGS also does not provide the entire result from the Twitter API: fields like place, retweeted (which indicates if a tweet was retweeted or not), and other useful fields are left off. Finally, I noticed that the text of tweets was often truncated; this made searching for complete user names, hashtags, and full text problematic. Although TAGS is a convenient way to collect tweets, it cannot possibly hope to represent the full network.

Despite these limitations, TAGS can still provide some powerful insights with a little modification. After importing my TAGS documents into a PostgreSQL database, I mined the tweet text for all user mentions and hashtags from individual Twitter users, which formed the edges of my network. I then imported this into Gephi v.0.9.1,22 where I performed some basic network analysis and visualizations of the data.
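As a sketch of that export step (the table and column names are invented for illustration, and psycopg2 is just one way to talk to PostgreSQL from Python):

import csv
import psycopg2

# Hypothetical "edges" table with source / target columns produced by
# the mention and hashtag mining described above.
conn = psycopg2.connect(dbname="tweets", user="postgres")
cur = conn.cursor()
cur.execute("""
    SELECT source, target, COUNT(*) AS weight
    FROM edges
    GROUP BY source, target
""")

# Gephi's spreadsheet importer recognizes Source / Target / Weight headers.
with open("gephi_edges.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Weight"])
    writer.writerows(cur)

cur.close()
conn.close()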

After this analysis, I decided that I needed to capture more tweets as they are issued. I used the TwitterStreamingImporter plugin for Gephi,23 which uses Twitter’s streaming API.24 The result is not all tweets that contain the specified search terms, but instead a sample capped at roughly 1% of global tweets. At ~300–500 million tweets per day,25 the streaming API will return at most 3–5 million tweets per day. For small data sets this may be sufficient, but it is impossible to tell how truly representative this sample is without the complete Twitter firehose.26

Unlike TAGS, TwitterStreamingImporter requires a constant internet connection to compile tweets. This is impractical, if not impossible, for individuals who move a single laptop or other machine between different locations. I also experienced some crashes while performing analytics and changing / running the visualization layouts; anyone wishing to style Twitter data using this technique may wish to save constantly and export different files for styling purposes. The plugin does a nice job of drawing edges between users, tweets, and hashtags, and specifies the type of edge (tweet, retweet, hashtag, etc.), although I would still like some more detailed information. The code is freely accessible,27 so I may be able to fork the repository and create a new plugin that pulls in all the data that I am interested in (especially geolocations, time of the tweet, etc.). However, I think simply using a Python script on a persistent connection will be my next step in this analysis.

Notes:

1  Dakota Access, LLC, and United States Army Corps of Engineers, “Environmental Assessment: Dakota Access Pipeline Project, Crossings of Flowage Easements and Federal Lands” (U.S. Army Corps of Engineers, Omaha District, 2016), 8, http://purl.fdlp.gov/GPO/gpo74064.

2  Further reading can be found at https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/, created by the NYC Stands with Standing Rock committee, a self-described “group of Indigenous scholars and activists, and settler/POC supporters” (https://nycstandswithstandingrock.wordpress.com/about/).

3  ABC News, “Court Denies Tribe’s Appeal to Block Dakota Access Pipeline,” October 11, 2016, http://abcnews.go.com/US/court-denies-tribes-appeal-block-controversial-dakota-access/story?id=42700614.

4  “Rezpect Our Water,” accessed November 6, 2016, http://rezpectourwater.com/; “Thousands Nationwide Show Solidarity with the Standing Rock Sioux and #NoDAPL,” Sierra Club, September 13, 2016, http://www.sierraclub.org/planet/2016/09/thousands-nationwide-show-solidarity-standing-rock-sioux-and-nodapl.

5  n.b. Twitter is case insensitive, but all user names and hashtags are capitalized here.

7  I used Gephi with the OpenOrd layout to create the network visualization after modifying TAGS data in a PostgreSQL database. Although the OpenOrd layout is intended for undirected graphs (see https://marketplace.gephi.org/plugin/openord-layout/), its ability to handle large datasets with limited computing resources made it an attractive choice for this investigation.

8  The modularity for the graph is 0.414, with 862 communities detected. 32 of these communities had 100 or more nodes, and together they contained 131,080 of the 133,702 total nodes (98.04%).

10  “About,” Unicorn Riot, accessed October 30, 2016, http://www.unicornriot.ninja/?page_id=372.

11  The Root Staff, “#NoDAPL: Indigenous Youths Occupy Hillary Clinton’s Brooklyn, NY, Headquarters,” The Root, October 29, 2016, http://www.theroot.com/articles/news/2016/10/nodapl-indigenous-youth-occupy-hillary-clintons-brooklyn-headquarters/; “Indigenous Youth Occupy Hillary Clinton Campaign Headquarters to Demand She Take Stand on #DAPL,” Democracy Now!, accessed November 4, 2016, http://www.democracynow.org/2016/10/28/indigenous_youth_occupy_hillary_clinton_campaign.

16  Oliver Milman, “Dakota Access Pipeline Company and Donald Trump Have Close Financial Ties,” The Guardian, October 26, 2016, sec. US news, https://www.theguardian.com/us-news/2016/oct/26/donald-trump-dakota-access-pipeline-investment-energy-transfer-partners; “The Latest: Trump Holds Dakota Access Pipeline Company Stock,” US News & World Report, accessed November 4, 2016, http://www.usnews.com/news/us/articles/2016-10-26/the-latest-pipeline-protesters-think-their-removal-imminent.

17  “Did Trump Say Climate Change Was a Chinese Hoax?,” @politifact, accessed November 4, 2016, http://www.politifact.com/truth-o-meter/statements/2016/jun/03/hillary-clinton/yes-donald-trump-did-call-climate-change-chinese-h/.

19  I performed Eigenvector analysis on the data set, but there was little deviation in the top ranked nodes from ranking by total degree.

25  Jim Edwards, “Leaked Twitter API Data Shows the Number of Tweets Is in Serious Decline,” Business Insider, accessed November 2, 2016, http://www.businessinsider.com/tweets-on-twitter-is-in-serious-decline-2016-2; “Twitter Usage Statistics – Internet Live Stats,” accessed November 2, 2016, http://www.internetlivestats.com/twitter-statistics/#sources.

26  Research on the representative accuracy of Twitter’s API has been mixed; see Fred Morstatter et al., “Is the Sample Good Enough? Comparing Data from Twitter’s Streaming API with Twitter’s Firehose,” arXiv Preprint arXiv:1306.5204, 2013; Fred Morstatter, Jürgen Pfeffer, and Huan Liu, “When Is It Biased?: Assessing the Representativeness of Twitter’s Streaming API,” in Proceedings of the 23rd International Conference on World Wide Web (New York, NY: ACM, 2014).

Bibliography

“A Pipeline Fight and America’s Dark Past.” The New Yorker, September 6, 2016. http://www.newyorker.com/news/daily-comment/a-pipeline-fight-and-americas-dark-past.

“About.” NYC Stands with Standing Rock, September 13, 2016. https://nycstandswithstandingrock.wordpress.com/about/.

“About.” Unicorn Riot. Accessed October 30, 2016. http://www.unicornriot.ninja/?page_id=372.

“Appeals Court Halts Dakota Access Pipeline Work Pending Hearing.” Indianz. Accessed November 6, 2016. http://www.indianz.com/News/2016/09/16/appeals-court-halts-dakota-access-pipeli.asp.

Baldacci, Marlena, Emanuella Grinberg, and Holly Yan. “Dakota Access Pipeline: Police Remove Protesters.” CNN. Accessed November 6, 2016. http://www.cnn.com/2016/10/27/us/dakota-access-pipeline-protests/index.html.

Dakota Access, LLC, and United States Army Corps of Engineers. “Environmental Assessment: Dakota Access Pipeline Project, Crossings of Flowage Easements and Federal Lands.” U.S. Army Corps of Engineers, Omaha District, 2016. http://purl.fdlp.gov/GPO/gpo74064.

“Dakota Access Pipeline.” Accessed November 6, 2016. http://www.daplpipelinefacts.com/.

“Dakota Access Pipeline: Overview.” Accessed November 6, 2016. http://www.daplpipelinefacts.com/about/overview.html.

“Did Trump Say Climate Change Was a Chinese Hoax?” @politifact. Accessed November 4, 2016. http://www.politifact.com/truth-o-meter/statements/2016/jun/03/hillary-clinton/yes-donald-trump-did-call-climate-change-chinese-h/.

Edwards, Jim. “Leaked Twitter API Data Shows the Number of Tweets Is in Serious Decline.” Business Insider. Accessed November 2, 2016. http://www.businessinsider.com/tweets-on-twitter-is-in-serious-decline-2016-2.

Healy, Jack. “From 280 Tribes, a Protest on the Plains.” The New York Times, September 11, 2016. http://www.nytimes.com/interactive/2016/09/12/us/12tribes.html.

“Indigenous Youth Occupy Hillary Clinton Campaign Headquarters to Demand She Take Stand on #DAPL.” Democracy Now! Accessed November 4, 2016. http://www.democracynow.org/2016/10/28/indigenous_youth_occupy_hillary_clinton_campaign.

“Judge Rules That Construction Can Proceed On Dakota Access Pipeline.” NPR.org. Accessed November 6, 2016. http://www.npr.org/sections/thetwo-way/2016/09/09/493280504/judge-rules-that-construction-can-proceed-on-dakota-access-pipeline.

“Life in the Native American Oil Protest Camps.” BBC News, September 2, 2016, sec. US & Canada. http://www.bbc.com/news/world-us-canada-37249617.

McCausland, Phil. “More Than 80 Dakota Pipeline Protesters Arrested, Some Pepper Sprayed.” NBC News, October 23, 2016. http://www.nbcnews.com/news/us-news/more-80-dakota-access-pipeline-protesters-arrested-some-pepper-sprayed-n671281.

McCleary, Mike. “As Standing Rock Protesters Face Down Armored Trucks, the World Watches on Facebook.” WIRED. Accessed October 30, 2016. https://www.wired.com/2016/10/standing-rock-protesters-face-police-world-watches-facebook/.

Milman, Oliver. “Dakota Access Pipeline Company and Donald Trump Have Close Financial Ties.” The Guardian, October 26, 2016, sec. US news. https://www.theguardian.com/us-news/2016/oct/26/donald-trump-dakota-access-pipeline-investment-energy-transfer-partners.

Morstatter, Fred, Jürgen Pfeffer, and Huan Liu. “When Is It Biased?: Assessing the Representativeness of Twitter’s Streaming API.” In Proceedings of the 23rd International Conference on World Wide Web. New York, NY: ACM, 2014.

Morstatter, Fred, Jürgen Pfeffer, Huan Liu, and Kathleen M Carley. “Is the Sample Good Enough? Comparing Data from Twitter’s Streaming API with Twitter’s Firehose.” arXiv Preprint arXiv:1306.5204, 2013.

ABC News. “Court Denies Tribe’s Appeal to Block Dakota Access Pipeline.” October 11, 2016. http://abcnews.go.com/US/court-denies-tribes-appeal-block-controversial-dakota-access/story?id=42700614.

———. “Timeline of the Dakota Access Pipeline Protests.” ABC News, October 31, 2016. http://abcnews.go.com/US/timeline-dakota-access-pipeline-protests/story?id=43131355.

“Rezpect Our Water.” Accessed November 6, 2016. http://rezpectourwater.com/.

The Root Staff. “#NoDAPL: Indigenous Youths Occupy Hillary Clinton’s Brooklyn, NY, Headquarters.” The Root, October 29, 2016. http://www.theroot.com/articles/news/2016/10/nodapl-indigenous-youth-occupy-hillary-clintons-brooklyn-headquarters/.

“The Digital Transition: How the Presidential Transition Works in the Social Media Age.” Whitehouse.gov, October 31, 2016. https://www.whitehouse.gov/blog/2016/10/31/digital-transition-how-presidential-transition-works-social-media-age.

“The Latest: Trump Holds Dakota Access Pipeline Company Stock.” US News & World Report. Accessed November 4, 2016. http://www.usnews.com/news/us/articles/2016-10-26/the-latest-pipeline-protesters-think-their-removal-imminent.

“Thousands Nationwide Show Solidarity with the Standing Rock Sioux and #NoDAPL.” Sierra Club, September 13, 2016. http://www.sierraclub.org/planet/2016/09/thousands-nationwide-show-solidarity-standing-rock-sioux-and-nodapl.

“Twitter Usage Statistics – Internet Live Stats.” Accessed November 2, 2016. http://www.internetlivestats.com/twitter-statistics/#sources.

Williams, Weston. “Standing Rock Protests Escalate, as Tribe Calls for DOJ to Investigate.” Christian Science Monitor, October 24, 2016. http://www.csmonitor.com/USA/Justice/2016/1024/Standing-Rock-protests-escalate-as-tribe-calls-for-DOJ-to-investigate.

 

Gephi, Conspiracies, and SNA in the Classroom: Midterm Thoughts

Image from https://gephi.org/images/screenshots/layout2.png

This semester I designed a class, Introduction to Social Networks and Conspiracy Theories, that makes extensive use of Gephi along with the downloadable version of Networks, Crowds, and Markets: Reasoning About a Highly Connected World by David Easley and Jon Kleinberg. I compiled readings on real conspiracies and conspiracy theories into a course packet; they cover ancient Athens, the assassination of Philip II, the Knights Templar, the Gunpowder Plot, the French Revolution, and conspiracy theories in the United States from the Revolution to the present. In short, this class uses social network analysis to study paranoia from Plato to NATO.

Key to this class is the understanding and use of SNA software. I chose Gephi as it has a forgiving learning curve for creating networks and conducting basic analysis, and its cross-platform capabilities were required, as I do not have access to a computer lab for the class. The other deciding factor was the ease of exporting Gephi files to the web (through the use of the excellent Sigmajs exporter), as the students will produce a number of publicly available network visualizations, in addition to a written report, for their final project.

Gephi has shown its usefulness to the class. The ability to very quickly take .csv files and make meaningful network diagrams impressed the students, and filtering a network in real time is a powerful way to show conceptually how eliminating bridges and key nodes can throw a network into confusion. Some other positive points:

Gephi’s GUI vs. command line tools
For my students, using a GUI has been a far better choice than a command-line or text-driven interface. While the Gephi GUI can sometimes do strange things (like eliminate buttons or workspaces), on the whole its basic functionality is relatively intuitive. After a few demonstrations of the basics, the students have grasped how to create a network from spreadsheet data.

Real-time rendering
Keeping the various layouts running while filtering / changing elements of the network (especially the old stand-by, Force Atlas) powerfully illustrates many network concepts. It is also a very cheap (in time and effort!) method of creating animated networks for the class.

Ease of stats
While some of the statistics I would like to see are not in the core of Gephi, the ones that are present are excellent. The students, after learning about the math and logic behind various network statistics, were quite relieved to discover how quickly Gephi can compute centrality, density, and degree measurements.

Styling
After spending some time going over the interface, the ease of selecting different attributes and measurements for node styling is something that really captured the students’ attention. I anticipate a flood of very interesting network diagrams for their final projects based on different styling / visualization choices, which is an excellent way for students to support their arguments.

Creating diagrams for the course
Using Gephi to create network diagrams for the conspiracy portion of the course is a very straightforward process, and the excellent export capabilities ensure that all of the networks I share look very professional.

While Gephi is an excellent piece of software, extensive use in the classroom has revealed some issues and missing features that do present a source of frustration for the class.

Java can be difficult
Supporting multiple operating systems with different Java installs on student laptops is an exercise in frustration. A class that uses Gephi extensively MUST have a supported computer lab, at the very least so that Java problems can be addressed and fixed for everyone at the same time in the same way. I am running my course without this, and I can attest that much class time has been wasted trying to troubleshoot Java and install issues on different OS / JVM combinations.

Gephi is not very fault tolerant 
Data, at least in the humanities, is often messy, malformed, and non-standards-compliant. I was stymied in class due to one character causing an issue in a data set that we found online – while text editors and Excel / OpenOffice handled the file gracefully, it blew up in Gephi.

Many of the concepts discussed in SNA texts cannot be easily seen in Gephi
Concepts like triadic closure are somewhat difficult to capture: Gephi can compute the total number of triads, but there is nothing to identify where the triads are in a graph, which is what students need to see. I could also not find a way to view cliques, or to identify bridges programmatically. Network balance is also something that is not readily apparent in Gephi.
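As a partial workaround, these measures can be computed outside of Gephi; a minimal sketch with Python’s networkx library (one option among several):

import networkx as nx

# A toy graph with one closed triad (A, B, C) and a dangling node D.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])

# Triangles per node -- shows *where* the triads are, not just the total.
print(nx.triangles(G))           # {'A': 1, 'B': 1, 'C': 1, 'D': 0}

# Maximal cliques, which Gephi does not list directly (order may vary).
print(list(nx.find_cliques(G)))  # e.g. [['A', 'B', 'C'], ['C', 'D']]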

Filtering can be difficult
While there are some powerful filtering features in Gephi, the class has had a difficult time conceptualizing their use and using them to their full potential. A more intuitive interface may solve some of these problems.

Some features are broken
Embeddedness is not a core feature for Gephi, and the plugin that computes this is incompatible with the current version of the code. In addition, filtering on partition for edges does not seem to currently work – this makes identification of cliques and balanced graphs more difficult. Along with this, Gephi can be very unstable at times, and some workarounds (like exporting a newly created graph and re-importing it to ensure compatibility with multiple edges) can be a hassle.

Summary
In short, I think Gephi is a good choice for the classroom, but one that will require some serious work from the instructor. I would HIGHLY recommend that you teach Gephi in a classroom setting, where JVM and OS choices are restricted and supported by IT staff. I would like to see more educators using Gephi so we can pressure the developers (or encourage interested students!) to add more functionality to the core of the software.

Reflections on the BAM Conference

I had the absolute privilege of attending the Big Ancient Mediterranean Conference (#BAM2016) this week. The remarkable projects, enthusiasm for all things digital, and congenial atmosphere were inspiring. Now that the conference has ended, I think it is a good time to organize my thoughts, and perhaps point out some of the common themes that particularly struck me.

1) Our projects are ready to talk to each other. This was one of the most exciting revelations of the conference. Many of the digital humanities projects and initiatives represented here not only offer their data in downloadable formats (.csv files, JSON dumps, etc.), but also provide feature-rich APIs. Even if we are not quite yet to the point where we are using the same metadata / data standards (more on that later), the use of APIs with permanent URIs allows our data sets to meaningfully interact. The work of Pelagios creates an excellent medium to facilitate such communication, and opens up our data to initiatives that are not limited to studies of the ancient world.

2) Users, users, users, users. We had some spirited and fascinating debate about who the audience is for digital humanities projects, and whether it is even possible to create an application that can be effectively used by different audiences (experts, the general public, grad students, etc.). I fall squarely on the side of the idea that we engage with multiple audiences by the very nature of a freely-accessible online platform, but our debate revealed a fundamental design question that is often not explicitly addressed: exactly *who* is a digital humanities project for? Although I may differ with the voices questioning the multi-audience approach, I certainly agree with the position that we need increased usability studies and more robust user information. It is not enough for us to create DH projects that answer our individual questions for ourselves – we need to understand how to communicate with an audience that is used to the visual literacies of the web and is less familiar with the conventions of scholarly communication derived from a print medium. The sample edition of Calpurnius from the Digital Latin Library (http://digitallatin.github.io/viewer/editio-2.0.html) is a great model – it captures the information of a textual apparatus free of technical jargon, rendering critical information to a wider audience without a loss of scholarly rigor.

3) Uncertainty. A corollary to the discussion around users is the question of representing uncertainty. There was an interesting question of why we should recognize fuzzy data at all: if an application is directed solely at an academic audience, is it not correct to assume that our users implicitly know that any data or representation of the ancient world is somewhat problematic, and therefore have no problem consuming visual representations that ignore the idea of uncertainty entirely? As I think that our projects need to communicate with non-academic audiences (and indeed academics who may not be as familiar with the inherent uncertainty of the ancient world), I see a very real need to represent the imprecision and uncertainty of our data. Almost all of the projects at BAM grappled with fuzzy data, whether that was geo-spatial (location, assignment to a place), textual (uncertain letter forms, unclear manuscript tradition), or interpretive (multiple archaeological reconstructions, the placement of garrison soldiers at a specific community). Almost every project dealt with uncertainty in a way that reflected the scholarly tradition of their subject area, like placing notes in an apparatus, or describing fuzzy data through text. I see a critical need to establish a common meta-data vocabulary that can, at the very least, alert users (both human and computational) to the presence of uncertainty in our work. I also see room for a common visual literacy for representing uncertainty in maps, social networks, or other visualizations, which is a far more complex issue.

4) Metadata and documentation. Even if it proves impossible, impractical, or undesirable to create a common visual literacy surrounding uncertainty, we need to implement a common way of indicating and describing fuzzy data that can be computationally consumed. This returns to my first point: our projects can now talk to each other through computational agents, but we must agree on the vocabulary governing that conversation. Alignment with Pelagios will help in that regard, but I think more attention needs to be paid at all levels of DH projects to metadata standards. For DH projects in the ancient world, the ontology for Linked Ancient World Data offered by LAWD (https://github.com/lawdi/LAWD) should be a starting point.

Much like the slow, often tedious process of generating metadata, creating documentation for DH projects is often overlooked. From comments in code to capturing the design decisions, DH documentation needs to go beyond the narrative of the research question and capture the entire creative, intellectual, and industrial process of a DH project. The suggestion to look to the hard sciences for guidance in this process is a fruitful place to start.

5) The use of open-source repositories and the continued importance of institutional support. Most of the projects at BAM have a presence on GitHub, and there was some very interesting discussion around the practicality and usefulness of a non-profit, academically oriented alternative. This debate had as its background the reality that GitHub and other free services are currently a critical component of our work, as many DH projects operate on a shoestring budget and are dependent on largess from an institution or grants. Such funding is often uncertain; Pleiades, one of the most exemplary projects at BAM, has a 50% success rate at securing NEH funding. For smaller projects this rate may be even lower; some participants indicated that the reward-to-work ratio of grant applications is not attractive for smaller projects.

There is some good news though, as many institutions have expressed growing interest in the digital humanities as a field. As a digital humanities community we need to build on this interest with a push for institutional backing. The University of Iowa clearly demonstrated the excellent outcomes of a group that is both dedicated to digital humanities and able to provide hosting, archiving, and other technical support.

6) The continued need for face-to-face gatherings. While we have many electronic forums for communication (Twitter, Slack, IRC, site forums, etc), there is still something special that happens when DH scholars are brought together for several days, freed of other distractions, and think about the same issues as a group. For me, the headspace of a conference is entirely different than using Skype in my office; my other projects and papers are out of sight, (largely) out of mind, and my focus is squarely on the discussion.

7) Release the tweets. One place where documentation is somewhat overlooked is at conferences like BAM. Many conferences generate an end product like proceedings, which, while valuable, cannot capture the conversation that surrounds each presentation. The incredible use of Twitter by BAM attendees, and the use of Storify to capture those tweets, can serve as a model for other conference proceedings. Conference organizers should establish an “official” Twitter tag, advertise it widely on social media, and ensure that the conference venue offers free wifi access to attendees. This expands the reach of the conference in real time to attendees and remote presenters who would otherwise be unable to participate in the conversation. A critical component of this is also archiving – for BAM, the use of Storify (part 1, part 2) and the support of Iowa libraries ensure that there is a searchable account of the conference and the wider conversation it sparked.

The BAM conference generated a lot of intriguing conversation and displayed a host of excellent projects. If this kind of interest, scholarship, and congeniality can be maintained, the future of DH is bright indeed.

2 of N: Gephi, D3.js, and maps: Success!

A working, geographically accurate map using Gephi, D3.js, and Leaflet. NOTE: Link subject to change.

In my previous post I outlined how I used D3.js to display a “raw” JSON output from Gephi. After some hacking around, I am now able to display my Gephi data on an interactive Leaflet map!

This is a departure from other work on the subject for a few reasons:

  1. Not all of my data has geographic information – indeed in many cases a specific longitude / latitude combination is inappropriate and would lend a false sense of permanence to anyone looking at the map. In my case I have names of Greek garrison commanders which have some relation to a place, but it is unclear in some instances if they are actually at a specific place, have dominion over the location, or are mentioned in an inscription for some other reason. Therefore, I need to locate data that has a fuzzy relation to a location (ancient people who may originate, reside, work, and be mentioned in different and / or unknown locations) and locations that may themselves have fuzzy or unknown geography. This is a problem for just about every ancient to pre-modern project, as we do not have a wealth of location information, or even a clear idea of where some people are at any particular moment.
  2. I want to show how social networks form around specific geographic points which are known, and have those social networks remain “reactive” on zooms, changing map states, etc. This can be expanded to encompass epistolary networks, knowledge maps, etc – basically anything that links people together who may not be locatable themselves.
  3. Gephi does not output in GeoJSON, and the remaining export options that are geographically oriented require that *all* nodes have geographic information. As this is not my case (see above), the standard export options will not work for me. Also, as part of my work on BAM, I want to create a framework that is as “plug and play” as possible, so that we can simply take Gephi files and drop them into the system to make new modules. Therefore this work has to be reproducible with a minimum of tweaking.

So, let us get to the code!

First things first: you need to make your html, bring in your javascript, and style some elements. I put the css in the file for testing – it will be split off later.


<!DOCTYPE html>

<head>
<meta name='viewport' content='width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no' />
<!-- Mapbox includes below -->
<script src='https://api.mapbox.com/mapbox.js/v2.2.2/mapbox.js'></script>
<link href='https://api.mapbox.com/mapbox.js/v2.2.2/mapbox.css' rel='stylesheet' />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="http://d3js.org/d3.v3.js"></script>
<meta charset="utf-8">
</head>
<!-- Will split off css when done with testing -->

<style>
.node circle {
stroke: grey;
stroke-width: 10px;
}

.link {
stroke: black;
stroke-width: 1px;
opacity: .2;
}

.label {
font-family: Arial;
font-size: 12px;
}

#map {
height: 98vh;
}

#attributepane {
display: none;
position: absolute;
height: auto;
bottom: 20%;
top: 20%;
right: 0;
width: 240px;
background-color: #fff;
margin: 0;
background-color: rgba(255, 255, 255, 0.8);
border-left: 1px solid #ccc;
padding: 18px 18px 18px 18px;
z-index: 8998;
overflow: scroll;
}
</style>

<body>

<div id='attributepane'></div>

<div id='map'>
</div>

Next, make a map.

<script>
var map = L.mapbox.map('map', 'yourmap', {
accessToken: 'yourtoken'
});

//set the initial view. This is pretty standard for most of the ancient med. projects
map.setView([40.58058, 36.29883], 4);

Pretty basic so far. Next we follow some of the examples that are already in the wild to initiate D3 goodness:


var force = d3.layout.force()
.charge(-120)
.linkDistance(30);

/* Initialize the SVG layer */
map._initPathRoot();

/* We simply pick up the SVG from the map object */
var svg = d3.select("#map").select("svg"),
g = svg.append("g");

Next, we bring in our json file from Gephi. Again, this is pretty standard:


d3.json("graph.json", function(error, json) {

if (error) throw error;

Now we get into the actual modifications to make the json, D3, and leaflet all talk to each other. The first thing to do is to modify the colors (from http://stackoverflow.com/questions/13070054/convert-rgb-strings-to-hex-in-javascript) so that D3 displays what we have in Gephi:


//fix up the data so it is what we want for d3
json.nodes.forEach(function(d) {
//convert the rgb colors to hex for d3
var a = d.color.split("(")[1].split(")")[0];
a = a.split(",");

var b = a.map(function(x) { //For each array element
x = parseInt(x).toString(16); //Convert to a base16 string
return (x.length == 1) ? "0" + x : x; //Add zero if we get only one character
})
b = "#" + b.join("");
d.color = b;

Next, we need to put in “dummy” coordinates for locations that do not have geography. This is messy and could probably be removed with some more efficient coding later. For the nodes that do have geography, the map.latLngToLayerPoint will translate the values into map units, which places them where they need to go. These are simply lat lon attributes in the Gephi file. I also set nodes that are fixed / not fixed, based on the presence of lat/lon data.


if (!("lng" in d.attributes) == true) {
//if there is no geography, then allow the node to float around
d.LatLng = new L.LatLng(0, 0);
d.fixed = false;
} else //there is geography, so place the node where it goes
{
d.LatLng = new L.LatLng(d.attributes.lat, d.attributes.lng);
d.fixed = true;
d.x = map.latLngToLayerPoint(d.LatLng).x;
d.y = map.latLngToLayerPoint(d.LatLng).y;
}
})

Now to setup the links. As we are keyed on attributes and not an index value, we need to follow this fix:


var edges = [];
json.edges.forEach(function(e) {
var sourceNode = json.nodes.filter(function(n) {
return n.id === e.source;
})[0],
targetNode = json.nodes.filter(function(n) {
return n.id === e.target;
})[0];

edges.push({
source: sourceNode,
target: targetNode,
value: e.Value
});
});

var link = svg.selectAll(".link")
.data(edges)
.enter().append("line")
.attr("class", "link");

Now to setup the nodes. I wanted to do a popup on a mouseclick event, but for some reason this is not firing (mousedown and mouseover do work, however). The following code builds the nodes, with radii, fill, and other information pulled from the JSON file. It also toggles a div that is populated with attribute information from the JSON. There is still some work to do at this part: the .css needs to be cleaned up, images need to be resized, and the attribute information for the nodes should be a configurable option when importing the JSON.


var node = svg.selectAll(".node")
.data(json.nodes)
.enter().append("circle")
//display nodes and information when a node is clicked on
//for some reason the click event is not registering, but mousedown and mouseover are.
.on("mouseover", function(d) {

//put in blank values if there are no attributes
var titleForBox, imageForBox, descriptionForBox = '';
titleForBox = '<h1>' + d.label + '</h1>';

if (typeof d.attributes.Description != "undefined") {
descriptionForBox = d.attributes.Description;
} else {
descriptionForBox = '';
}

if (typeof d.attributes.image != "undefined") {
imageForBox = '<img src="' + d.attributes.image + '" align="left">';
} else {
imageForBox = '';
}

var htmlForBox = imageForBox + ' ' + titleForBox + descriptionForBox;
document.getElementById("attributepane").innerHTML = htmlForBox;
toggle_visibility('attributepane');
})
.style("stroke", "black")
.style("opacity", .6)
.attr("r", function(d) {
return d.size * 2;
})
.style("fill", function(d) {
return d.color;
})
.call(force.drag);

Now for the transformations when the map state changes. The idea is to keep the fixed nodes in the correct place, but to redraw the “floating” nodes when the map is zoomed in and out. The nodes that need to be transformed are dealt with first, then the links are rebuilt with the new (or fixed) x / y data.


//for when the map changes viewpoint
map.on("viewreset", update);
update();

function update() {

node.attr("transform",
function(d) {
if (d.fixed == true) {
d.x = map.latLngToLayerPoint(d.LatLng).x;
d.y = map.latLngToLayerPoint(d.LatLng).y;
return "translate(" +
map.latLngToLayerPoint(d.LatLng).x + "," +
map.latLngToLayerPoint(d.LatLng).y + ")";
}
}
);

link.attr("x1", function(d) {
return d.source.x;
})
.attr("y1", function(d) {
return d.source.y;
})
.attr("x2", function(d) {
return d.target.x;
})
.attr("y2", function(d) {
return d.target.y;
});

node.attr("cx", function(d) {
if (d.fixed == false) {
return d.x;
}
})
.attr("cy", function(d) {
if (d.fixed == false) {
return d.y;
}
})

//this kickstarts the simulation, so the nodes will realign to a zoomed state
force.start();
}

Next, time to start the simulation for the first time and close out the d3 json block:


force
.links(edges)
.nodes(json.nodes)
.start();
force.on("tick", update);

}); //end

Finally, time to put a function in to toggle the visibility of the div (from here) and close out our file:


function toggle_visibility(id) {
var e = document.getElementById(id);
if (e.style.display == 'block')
e.style.display = 'none';
else
e.style.display = 'block';
}
</script>
</body>

There you have it – a nice, interactive map with a mix of geographic information and social networks. While I am pleased with the result, there are still some things to fix / address:

  1. The click event not working. This is a real puzzler.
  2. Tweaking the distances of the simulation – I do not want nodes to be placed half a world away from their connections. This may have to be map zoom level dependent.
  3. Style the links according to Gephi and provide popups where applicable. This should be easy enough to do, but simply hasn’t been done in this code.
  4. Tweak the visibility of the connections and nodes. While retaining an option to show the entire network at once, my idea is to have a map that starts out with JUST the locations, and then makes the nodes that are connected to that location visible when you click on it (which would also apply to the unlocated nodes – i.e. you see what they are connected to when you click on them).
  5. Connected to the above point, the implementation of a slider to show nodes in a particular timeframe. As my data spans a period from the 600s BCE to the 200s CE, this would provide a better snapshot of a particular network at a particular time.
  6. Implement a URI based system – you will be able to go to address/someEntityName and that entity will be selected with its information pane and connected neighbors displayed. This will result in an RDF file that will be sent to the Pelagios Project.
  7. Fix up the .css for the information pane.

I will detail further steps in a later post.