Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • jetty crash troubleshooting

    - by user886356
    Recently I switched to Amazon EC2 + Jetty 9 + Oracle JDK 7u45 to save cost, and I have found the Jetty server very unstable: it crashes randomly without leaving any JVM dump file. I tried enabling stdout with dumpBeforeStop=TRUE, but it does not append the dump messages to stderrout.log before the crash. It does not seem to be related to an OutOfMemoryError, since I enabled the verbose GC options and there is still plenty of free memory before the crash: 162604K->3340K(176960K), 0.2240040 secs] 248332K->89101K(373568K), 0.2736860 secs] [Times: user=0.01 sys=0.01, real=0.28 secs]. I tried downgrading to Jetty 8 with different JDK combinations (JDK 6 / JDK 7) and still got the same problem. I also tried removing all JVM options and running Jetty with "sudo java -jar start.jar"; it still crashes. Is there any other way to troubleshoot the problem?
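
    One thing that could narrow it down (a hedged sketch, not a confirmed fix): start the JVM with its crash- and heap-dump options pointed at a known writable directory, and check whether the Linux kernel's OOM killer is terminating the process, since an OOM-killer kill leaves no hs_err file or any other JVM-side trace. The flags below are standard HotSpot options; the paths are placeholders.

        # sketch: extra HotSpot diagnostics while reproducing the crash (paths are placeholders)
        sudo java \
            -XX:ErrorFile=/var/log/jetty/hs_err_%p.log \
            -XX:+HeapDumpOnOutOfMemoryError \
            -XX:HeapDumpPath=/var/log/jetty \
            -verbose:gc -Xloggc:/var/log/jetty/gc.log \
            -jar start.jar

        # if no hs_err file ever appears, check whether the kernel OOM killer ended the process
        dmesg | grep -i 'killed process'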

    Read the article

  • Generating Thermal Printer (Zebra Printer) Sized PDFs for FedEx Labels

    - by Michael Hart
    Background I own a company which does a lot of FedEx Ground shipping. We have a 3rd party fulfillment center, which stores some of our inventory and ships it at our request. Zebra/thermal printers are the most cost-effective shipping label printers available, and our 3rd party fulfillment center has one. I want to generate the labels locally, then e-mail the 3rd party fulfillment center a PDF of the labels which they can then print on their printer. Problem The trouble is, I can't seem to figure out how to print these 4" x 6" labels to a PDF, as FedEx (both Ship Manager and fedex.com) uses JavaScript to detect what printer I have. Question What's a clever way to send my 3rd party fulfillment center a PDF (or equivalent) of our 4" x 6" Zebra thermal printer labels so they can print them out without re-entering the data?
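
    One possible direction (a sketch only, not FedEx's supported workflow): capture or export the label content and lay it onto a PDF page that is exactly 4" x 6", for example with Python's reportlab library, so the fulfillment center can print it at native label size. The file names below are placeholders.

        # sketch: build a 4" x 6" PDF page sized for a thermal label printer
        from reportlab.lib.units import inch
        from reportlab.pdfgen import canvas

        c = canvas.Canvas("fedex_label.pdf", pagesize=(4 * inch, 6 * inch))
        # place a captured label image (placeholder path), scaled to fill the page
        c.drawImage("captured_label.png", 0, 0, width=4 * inch, height=6 * inch)
        c.showPage()
        c.save()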

    Read the article

  • Is there any reason to subnet a home network?

    - by Will
    For networks, I understand we set a netmask on each computer to let it know which IPs it can talk to without going through the router - IPs on the same subnet can talk directly to each other and do not have to go through a router/switch. However, in today's home networks (and I suspect corporate networks as well) every computer is connected to a router/switch (at the low cost of today's hardware I doubt there is much of a market for wired repeaters/hubs). This seems to obviate the need for a subnet mask and subnetting. Considering that in most modern home architectures every computer goes through the router, even to talk to computers on the same network, is there any reason for me to subnet a home network?
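
    For illustration of the first sentence (a small sketch with made-up addresses): the netmask is what lets a host decide whether a destination is on the local subnet, and therefore reachable directly, or has to be handed to the default gateway.

        # sketch: how a host uses its netmask to decide "same subnet or via the gateway?"
        import ipaddress

        local_net = ipaddress.ip_network("192.168.1.0/24")   # host address + netmask
        for dest in ("192.168.1.42", "8.8.8.8"):
            direct = ipaddress.ip_address(dest) in local_net
            print(dest, "-> deliver directly" if direct else "-> send to default gateway")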

    Read the article

  • Character encoding issues when generating MD5 hash cross-platform

    - by rogueprocess
    This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this: message = "hello world" m = md5() m.update(message) Then I take a hex version of that MD5 hash using: m.hexdigest() and send the message & MD5 hash via a network, let's say a JMS message or an HTTP request. Now I receive this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library): String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s) My feeling is that this is wrong because I have not specified the character encoding at either end. So the original hash will be based on the bytes of the Python version of the string, the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right? (I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say if changing this fixes it or not.) Thank you!
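
    A minimal sketch of the fix being described, assuming UTF-8 is the encoding agreed on by both ends: hash an explicit byte encoding of the string rather than whatever default each platform applies.

        # sketch: make the byte sequence (and therefore the MD5) explicit on the sending side
        import hashlib

        message = "hello world"
        digest = hashlib.md5(message.encode("utf-8")).hexdigest()
        # the receiver must hash the bytes of the *same* encoding,
        # e.g. in Java: DigestUtils.md5Hex(s.getBytes("UTF-8"))
        print(digest)   # 5eb63bbbe01eeed093cb22bb8f5acdc3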

    Read the article

  • Suggestion for Colocation?

    - by John M.
    Does anyone have a suggestion for a good colocation company? I just don't know much about who is the best colocation provider. I have been using EC2 and S3 and it is becoming really expensive due to the bandwidth cost, especially people downloading content from S3. I have looked at Peer1 and a few other companies. I am not sure why they don't list prices for their colocation service on their websites. They force you to chat with them and waste your time.

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one by one and expects each record to be a data structure in the format produced by XML::Simple, since it has used XML::Simple since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - being a DOM parser, it needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record by record. However, rewriting the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one based on the XML pictured above, so that they can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is - e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
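
    The question is specifically about Perl, but purely to illustrate the streaming, constant-memory idea being asked for, here is a sketch in Python: each <rec> element is turned into a small record structure, handed to a callback, and then freed, so memory stays bounded regardless of file size.

        # sketch of the record-at-a-time idea (Python used for illustration only)
        import xml.etree.ElementTree as ET

        def process_record(record):              # stand-in for $obj->process_record($record)
            print(record)

        for event, elem in ET.iterparse("records.xml", events=("end",)):
            if elem.tag == "rec":
                record = {child.tag: child.text for child in elem}   # {'f1': 'v1', 'f2': 'v2'}
                process_record(record)
                elem.clear()                     # drop the parsed element to keep memory flat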

    Read the article

  • What can I do to prevent my user folder from being tampered with by malicious software?

    - by Tom Wijsman
    Let's assume some things: Back-ups do run every X minutes, yet the things I save should be permanent. There's a firewall and virus scanner in place, yet there happens to be a zero-day attack on me. I am using Windows. (Although feel free to append Linux / OS X parts to your answer.) Here is the problem: any software can change anything inside my user folder. Tampering with the files could cost me my life, whether it's accessing, modifying or wiping them. So, what I want to ask is: Is there a permission-based way to disallow programs from accessing my files in any way by default? Extending on the previous question, can I ensure certain programs can only access certain folders? Are there other less obtrusive ways than using Comodo? Or can I make Comodo less obtrusive? For example, the solution should be proof against (DO NOT RUN): del /F /S /Q %USERPROFILE%

    Read the article

  • vmware server end of life, where to go now?

    - by matnagel
    We have some virtual machines on VMware Server 2.x running on 64-bit hardware and are quite happy with it. As VMware Server will no longer be offered, we are thinking of migrating to ESXi, which seems to be free. We will have to install the specialized network cards, but that's a minor problem. Still, having been left alone with a rather quietly discontinued product, there is some resistance to VMware. VirtualBox seems to work: http://blogs.oracle.com/virtualization/2010/06/migrating_from_vmware_to_virtu.html What other options free of licensing cost are there? We have Windows Server 2003 32-bit VMs and also Linux 32- and 64-bit VMs to migrate. So Xen, which does not run Microsoft OSes, does not seem to be an option.

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely-used algorithm that has time complexity worse than that of another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in a form: There are algorithms A and B that have O(N**2) and O(N) time complexity correspondingly, but B has such a big constant that it has no advantages over A for inputs less than the number of atoms in the Universe. Example highlights from the answers: Simplex algorithm -- worst-case is exponential time -- vs. known polynomial-time algorithms for convex optimization problems. A naive median of medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm. Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines. All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst-case and average-case scenario? Related: The Rise of ``Worse is Better''. (For the purpose of this question the "Worse is Better" phrase is used in a narrower (namely -- algorithmic time-complexity) sense than in the article.) Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."
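
    A toy illustration of the constant-factor point (a sketch, with an arbitrary input size): an O(N**2) insertion sort can beat an O(N log N) merge sort on very small inputs, which is why production sorts such as Timsort switch to insertion sort for short runs.

        # sketch: asymptotically worse insertion sort vs. merge sort on a tiny input
        import random, timeit

        def insertion_sort(a):                    # O(N**2) worst case, tiny constants
            a = list(a)
            for i in range(1, len(a)):
                key, j = a[i], i - 1
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a

        def merge_sort(a):                        # O(N log N), but more overhead per element
            if len(a) <= 1:
                return list(a)
            mid = len(a) // 2
            left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            return out + left[i:] + right[j:]

        data = [random.random() for _ in range(16)]            # "small" is an arbitrary choice
        for f in (insertion_sort, merge_sort):
            print(f.__name__, timeit.timeit(lambda: f(data), number=20000))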

    Read the article

  • Media Temple-like hosting services?

    - by antonpug
    I have a couple of WordPress sites which do not get much traffic now, but I plan on expanding to something like 1,000-2,000 visits/day in a year or two. Media Temple has some really nice offerings, but their WordPress plan is $20/month, which is a little too much, seeing as at this point my site is more of a hobby than a money-making machine. I currently host with HostGator (just switched from GoDaddy / iPage / Bluehost). All these cheaper/popular hosting services are okay, but it would be nice to find something a little bit more "premium", but at a lower cost than MT. Anyone know anything worth looking at?

    Read the article

  • Why is business-class storage so expensive?

    - by Mark Henderson
    This is a Canonical Question about the Cost of Enterprise Storage. See also the following question: What's the best way to explain storage issues to developers and other users Regarding general questions like: Why do I have to pay 50 bucks a month per extra gigabyte of storage? Our file server is always running out of space, why doesn't our sysadmin just throw an extra 1TB drive in there? Why is SAN equipment so expensive? The Answers here will attempt to provide a better understanding of how enterprise-level storage works and what influences the price. If you can expand on the Question or provide insight as to the Answer, please post.

    Read the article

  • Advice for Windows XP Scripting, WSH versus PowerShell

    - by Greg Graham
    After much experience scripting in the Unix/Linux open-source world, using languages such as Bourne shell, Perl, Python, and Ruby, I now find myself needing to do some Windows XP admin scripting. It appears that the legacy environment is Windows Script Host (WSH), which can use various scripting languages, but the primary language is VBScript, and it is based on COM objects. However, the future appears to be Windows PowerShell, which is based on .NET. I haven't done Basic since Applesoft in the 70s, so I'm not keen on learning VBScript, although I did learn enough to write a small script to mount network drives. If I'm going to spend time to really learn this, I'm leaning towards investing my time in the .NET PowerShell environment, if it truly is the future. I did some C# Windows Forms programming a couple of years ago, so I have some exposure to .NET, which also makes PowerShell attractive. Understanding that no one has a crystal ball to predict the future of Microsoft, I would like to hear from anyone who is a PowerShell user and thinks it's worthwhile, or from anyone who knows of serious drawbacks to PowerShell and recommends that I stay away from it. Update: I ended up using WSH/VBScript for a particular script that I am installing as a startup script on users' Windows XP workstations. All I have to do is copy it to their Startup folder, and I'm done. However, I only learned enough WSH to accomplish this one job. I am glad to see that PowerShell is the future, and when I have more complicated scripting tasks, I'll turn to PowerShell.

    Read the article

  • Displaying ads over WiFi hotspot

    - by Ahsan
    I have recently extended my WiFi network with high-speed antennas across my area, which covers almost 300-400 people. I am not charging them anything, but I would like to generate some revenue through advertisements placed on the websites that they visit. Is it possible to display ads from Google? (I know I can redirect to the advertisements using some cache server or firewall.) It's just like a free VPN, but I would like to have my advertisements shown above the websites they visit so I can recover the cost of the WiFi that I offer. Any suggestions would be great!

    Read the article

  • ASP.NET: Problem with donut caching

    - by Shyju
    I have an ASP.NET page where I am trying to do some output caching, but I ran into a problem. My ASPX page has <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="MYProject._Default" %> <%@ OutputCache Duration="600" VaryByParam="None" %> <%@ Register TagPrefix="MYProjectUC" TagName="PageHeader" Src="~/Lib/UserControls/PageHeader.ascx" %> <%@ Register TagPrefix="MYProjectUC" TagName="PageFooter" Src="~/Lib/UserControls/PageFooter.ascx" %> and I have used the user control called "PageHeader" in the ASPX page. In PageHeader.ascx, I have an ASP.NET substitution control, where I want to show some links based on the logged-in user. <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="PageHeader.ascx.cs" Inherits="MyProject.Lib.UserControls.PageHeader1" %> <div class="headerRow"> <div class="headerLogo"> <a href="Default.aspx"><img src="Lib/Images/header.gif" alt=""></a> </div> <div id="divHeaderMenu" runat="server"> <asp:Substitution ID="subLinks" runat="server" MethodName="GetUserProfileHeaderLinks" /> </div> </div><!--headerRow--> In my ascx.cs file, I have a static method which returns a string based on whether the user is logged in or not, using the session: public static string GetUserProfileHeaderLinks(HttpContext context) { string strHeaderLinks = string.Empty; // check session and return string return strHeaderLinks; } But still the page shows the same content for both a logged-in user and a guest user. My objective is to have the page cached except for the content inside the substitution control. Any idea how to achieve this? Thanks in advance

    Read the article

  • How to make a dynamically generated .net service client read configuration from another location than the application's app.config

    - by Bryan
    Hi, I've currently written code to use the ServiceContractGenerator to generate web service client code based on a WSDL, and then compile it into an assembly in memory using the CodeDOM. I'm then using reflection to set up the binding, endpoint, and service values/types, and then ultimately invoke the web service method based on XML configuration that can be altered at run time. This all currently works fine. However, the problem I'm currently running into is that I'm hitting several exotic web services that require lots of custom binding/security settings. This is forcing me to add more and more configuration into my custom XML configuration, as well as the corresponding updates to my code to interpret and set those binding/security settings in code. Ultimately, this makes adding these 'exotic' services slower, and I can see myself eventually reimplementing the 'system.serviceModel' section of the web or app.config file, which is never a good thing. My question is, and this is where my lack of experience with .NET and C# shows: is there a way to define the configuration normally found in the web.config or app.config 'system.serviceModel' section somewhere else, and at run time supply this configuration to the web service client? Is there a way to attach an app.config directly to an assembly as a resource, or any other way to supply this configuration to the client? Basically, I'd like to attach an app.config containing only a 'system.serviceModel' section to the assembly containing a web service client so that it can use that configuration. This way I wouldn't need to handle every configuration under the sun; I could let .NET do it for me. FYI, it's not an option for me to put the configuration for every service in the app.config of the running application. Any help would be greatly appreciated. Thanks in advance! Bryan

    Read the article

  • Intel Core i7-4960HQ vs. 4850HQ (Haswell) [on hold]

    - by Timothy R. Butler
    I'm looking at the new MacBook Pros and trying to decide between the Core i7-4960HQ (2.6 GHz) and i7-4850HQ (2.3 GHz). I've found some synthetic benchmarks comparing them, but I haven't found a lot of data, so I'd appreciate any pointers to good comparisons for the Haswell family (especially these two processors). My cursory analysis seems to suggest there isn't a huge gain from the extra 300 MHz. I'd like to determine not only whether this is generally true, but also to figure out whether the gains in performance come at too high a cost. Is the 2.6 GHz part going to be pushing the limits of what can fit in a thin laptop without overheating? I've looked at some of Intel's documentation, but have not been able to determine what the normal and maximum operating temperature differences are between the models. In the past, there have been times when Intel's fastest models in a given range ran especially hot and/or consumed significantly more power compared to slightly slower models. Do those concerns factor into the current generation?

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others): For a team's own internal use when combined with other measurements. For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40). For senior management when presented as an aggregated statistic across a number of teams or a whole department. But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet / similar tools

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions: Amazon EC2 Backup Strategy Amazon EC2 + EBS:: Regular backup plan? Simple Backup Strategy for Amazon EC2 instances / volumes? And this suggestion http://alestic.com/2009/09/ec2-consistent-snapshot I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks of history, but delete old snapshots as new ones come, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way to do it using a tool like Puppet, without a painful learning curve? (Or via other tools like http://ylastic.com.) If yes, how?
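
    For illustration, a minimal sketch of the rolling-snapshot idea using the boto3 library (the volume ID and retention window are placeholders, and boto3 itself is an assumption - treat this purely as the kind of script a cron job or Puppet-managed task could run):

        # sketch: create a new EBS snapshot, then prune snapshots older than the retention window
        import datetime
        import boto3

        VOLUME_ID = "vol-0123456789abcdef0"       # placeholder volume ID
        RETENTION = datetime.timedelta(weeks=4)

        ec2 = boto3.client("ec2")
        ec2.create_snapshot(VolumeId=VOLUME_ID, Description="nightly rolling backup")

        cutoff = datetime.datetime.now(datetime.timezone.utc) - RETENTION
        snaps = ec2.describe_snapshots(
            Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
            OwnerIds=["self"],
        )["Snapshots"]
        for snap in snaps:
            if snap["StartTime"] < cutoff:        # StartTime is a timezone-aware datetime
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])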

    Read the article

  • best practice? Consumer data in MySQL on Amazon EBS (Elastic block store)

    - by jeff7091
    This is a consumer app, so I will care about storage costs - I don't want to have 5x copies of data lying about. The app shards very well, so I can use MySQL and not have scaling issues. Amazon EBS has a nice baseline+snapshot backup capability that uses S3. This should have a light footprint (in terms of storage cost). BUT: the magnolia.com story scares the crap out of me: basically flawless block-level backup of a corrupt DB or filesystem. Is there anything that is nearly as storage efficient as EBS at the MySQL level?

    Read the article

  • Windows 2008 server hosting in Europe

    - by Lasse P
    Hi, I'm searching for a Windows 2008 server in Europe, preferably in Germany or the UK, or anywhere with good routing to Denmark (as that is where the primary traffic will be generated from). The server will be used as a web server (ASP.NET MVC, PHP), mail server and database server. We are running a few sites with around 200 concurrent users, which isn't much, but we intend to expand in the near future, and the server should be easy to scale by adding more RAM and HDD space - if that's possible. I think a virtual server may be the best choice - Hyper-V or Virtuozzo? - considering cost vs. specs, but I'm open to suggestions. The max budget is in the range of $1000-1200/year. Do you guys have any suggestions? Let me know if you need further info.

    Read the article

  • Data view web part throwing error

    - by Ashutosh Singh
    Hi, I'm using an XSLT-based data view web part. The steps I have taken to create the data view web part are: 1. added a list view web part to the page 2. modified the toolbar property to show the full toolbar 3. opened the web page containing the above list view web part in SharePoint Designer and converted it to an XSLT-based web part (to make further changes to the UI) 4. saved the page and previewed it in the browser. In the browser the web part was throwing an error, while I was able to see it properly in Designer without any error. The error message shown in the web part was: ** Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Windows SharePoint Services-compatible HTML editor such as Microsoft Office SharePoint Designer. If the problem persists, contact your Web server administrator. ** The error message provided in the SharePoint log file was: 05/12/2010 17:56:29.54 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform() 05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88wy Medium SPDataSourceView.ExecuteSelect() - selectArguments: IsEmpty=True, MaximumRows=0, RetrieveTotalRowCount=False, SortExpression=, StartRowIndex=0, TotalRowCount=-1 05/12/2010 17:56:29.62 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Controls 88x2 Medium SPDataSourceView.ExecuteSelect() - formattedQuery = 1 05/12/2010 17:56:29.64 w3wp.exe (0x19FC) 0x1E9C Windows SharePoint Services Web Parts 89a1 Monitorable Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebPartPages.DataFormWebPart.ResolveParameterValuesToXsl(ArgumentClassWrapper argList) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    Read the article

  • jQuery changing fields to substring of related field

    - by Katherine
    Another jQuery calculation question. I have this, which is sample code from the plugin site that I am playing with to get this working: function recalc(){ $("[id^=total_item]").calc( "qty * price", { qty: $("input[name^=qty_item_]"), price: $("input[name^=price_item_]"), }, function (s){ return s.toFixed(2);}, function ($this){ var sum = $this.sum(); $("#grandTotal").val( // round the results to 2 digits sum.toFixed(2) ); } ); } Changes in the price fields cause the totals to update: $("input[name^=price_item_]").bind("keyup", recalc); recalc(); The problem is I won't know the value of the price fields; they will be available to me only as a substring of values entered by the user - call it 'itemcode'. There will be a variable number of items, based on a PHP query. I've come up with this to change the price based on the itemcode: $("input[name^='itemcode_item_1']").keyup(function () { var codeprice = this.value.substring(2,6); $("input[name^='price_item_1']").val(codeprice); }); $("input[name^='itemcode_item_2']").keyup(function () { var codeprice = this.value.substring(2,6); $("input[name^='price_item_2']").val(codeprice); }); However, while it does that, it also stops the item_total from updating. Also, I feel there must be a way to avoid writing a numbered function for each item on the list. However, when I just use $("input[name^='itemcode_item_']"), updating any itemcode field updates all price fields, which is not good. Can anyone point me in the right direction? I know I am a bit clueless here, but JavaScript of any kind is not my thing.

    Read the article

  • cocoa - making a UIImageView class act like a UIButton

    - by Mike
    For some reason that would take too much time to explain, I have to create a UIImageView-based class that has some of the properties of a button. Imagine a class like UISwitch (no, I cannot base my class on UISwitch) with two states, on and off. When the user selects one state, it runs a method like a button that was clicked, and the method receives the sender id. I already have the class more or less working. The class is based on UIImageView. I am having difficulty understanding the following. When I create a new button, I have a line like [myButton addTarget:self action:NSSelectorFromString(myMethod) forControlEvents:UIControlEventTouchUpInside]; This line defines a target and an action to run when the button is triggered. In my case I would need something like [myObject addTarget:self action:myMethod ifState:1] //or another thing ifState:2 I have no idea what kind of code I should put in the class to make this work. Remember that, like a button, the class should send the "sender" information to identify the object which triggered the action... Can you transcendental gurus help? Thanks for any help!

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to-date has come up with the following possible solutions for these languages: C++ -- The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT. C - igraph and SNAP (Small-world Network Analysis and Partitioning); latter uses OpenMP for parallelism on SMP systems. Java - I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space. Python - igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; last release in 2005 looks stale now. Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article
