Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.


  • How can I plot a time series graph with Perl?

    - by Jazz
    I have some data from a database (SQLite), mapping a value (an integer) to a date. A date is a string with this format: YYYY-MM-DD hh:mm. The dates are not uniformly distributed. I want to draw a line graph with the dates on X and the values on Y. What is the easiest way to do this with Perl? I tried DBIx::Chart but I could not make it recognize my dates. I also tried GD::Graph, but as the documentation says: "GD::Graph does not support numerical x axis the way it should. Data for X axes should be equally spaced."

    Read the article

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't get a character set called out in the "Content-Type" response header all the time. I do get it in response to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever. I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is to run the Jetty Runner .jar file with a couple of simple arguments to set up the port number and logfile path, and then I just give it the .war file to run. It works great, except for the missing character set :-) Anybody have a quick sample config file that might fix this? Edit: if it matters, I'm running Jetty 7.0.0 RC3; I've also tried with a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.
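
    One way to get a character set into the Content-Type header without wrestling with a full Jetty config file is a plain servlet filter that sets the response encoding before anything is written; the minimal sketch below assumes that approach (the class name CharsetFilter is made up, and the filter still needs a mapping in the .war's web.xml).

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class CharsetFilter implements Filter {
            public void init(FilterConfig config) throws ServletException { }

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // Set the encoding before the body is written so the container
                // appends ";charset=UTF-8" to the Content-Type response header.
                response.setCharacterEncoding("UTF-8");
                chain.doFilter(request, response);
            }

            public void destroy() { }
        }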

    Read the article

  • Windows Phone app data connection FAILS in Marketplace-published app but WORKS in Visual Studio development (same XAP)

    - by Tom
    Tearing my hair out(!) My last app update has been accepted and released by Marketplace, but the remote server data connection does NOT work/connect from the app downloaded from Marketplace. However, the same app (the accepted XAP), when I run it from Visual Studio using the same remote server address, works just fine. WHY?... Has anyone else ever run into anything like this? Here's the remote path: http://www.streamcommunication.com/ZenAwaken/DownloadableCollections.xml I can load that in a browser and retrieve the XML. When I'm in Visual Studio I can connect via that path, retrieve the file, and consume the data. BUT the exact same XAP which has been accepted and distributed by the Windows Phone Marketplace FAILS. Is it possible that Marketplace does something (encryption?) to the XAP that would corrupt the path string? Any thoughts or experiences would be very helpful! Tom

    Read the article

  • Restore a database with LDF file only

    - by Martin
    First of all, I know how stupid it is not to have any backup. I can't help it, but I have to (try to) solve it. I have a transaction log (LDF) file from a SQL Server 2000 database that contains all transactions since the creation of the database. No truncation has been done. The MDF file is gone, probably because of some disk failure. There is no backup, not of the original database and not of the transaction log. I have tried to link the transaction log to a new, clean database, but (of course) that failed because SQL Server checks the identity of both files. I have read about software that can read the transaction log. ApexSQL seems to do that. I tried to install the trial version, but it gives weird errors when trying to start the program. Does anyone know a solution for me? It may involve third-party software, but I'd prefer a clean SQL Server solution.

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small/simple REST web queries in a production environment. For every 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure whether these platforms have any recovery from errors, though (checking if a job succeeds). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
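
    At the scale described, a single process with a small worker pool draining an in-memory FIFO queue already captures the shape of the pattern; the hedged sketch below assumes Java 11's HttpClient, placeholder URLs, and a stubbed-out database write. A production setup would back the queue with a broker so jobs survive worker crashes and can be retried.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;

        public class RestWorkerPool {
            public static void main(String[] args) {
                // Placeholder job URLs; in practice these would be enqueued by a producer.
                BlockingQueue<String> jobs = new LinkedBlockingQueue<>(
                        List.of("https://api.example.com/q1", "https://api.example.com/q2"));
                HttpClient http = HttpClient.newHttpClient();
                ExecutorService pool = Executors.newFixedThreadPool(4);

                for (int i = 0; i < 4; i++) {
                    pool.submit(() -> {
                        String url;
                        while ((url = jobs.poll()) != null) {
                            try {
                                HttpResponse<String> resp = http.send(
                                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                                        HttpResponse.BodyHandlers.ofString());
                                // Derive the small result from resp.body() and INSERT it
                                // into PostgreSQL via JDBC here (omitted).
                                System.out.println(url + " -> " + resp.statusCode());
                            } catch (Exception e) {
                                // A real system would re-queue with a retry limit instead.
                                System.err.println("failed: " + url + " (" + e + ")");
                            }
                        }
                    });
                }
                pool.shutdown();
            }
        }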

    Read the article

  • Help regarding NoSQL databases like Hadoop, HBase, etc.

    - by user560370
    I am new to distributed NoSQL databases like Hadoop, Cassandra, etc. I have a few questions for which I seek expert advice:
    1. What problems/challenges will one generally face when shifting from a conventional database like MySQL to these large cluster-based databases?
    2. What difficulties, if any, arise when adapting to a newer version of these open-source projects?
    3. What things are generally stored/kept in memcached for fast rendering of a page?
    4. How can I understand the source code of open-source projects so that I can build on it and maybe give back to the community?
    These questions may sound basic, but please, it's a request for the experts to answer them in detail and to the best of their abilities.

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app: users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with a number of reads and writes per second. Sure, I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost-efficient solution, and I want to know what a throughput of 50 reads/writes per second compares to. Is that a high number? What is a good throughput number for a message-sending app with, say, 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second sounds like a small number to me, but since I'm not familiar with this stuff I'm just looking to put everything in context. What would 100 reads/writes per second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message-sending app, is there any reason I'd want to choose, say, Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
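
    For context, here is a back-of-envelope conversion from daily users to provisioned writes per second; the 50,000 daily users figure comes from the question above, while the 20 messages/user/day and the 10x peak-to-average ratio are pure assumptions.

        public class ThroughputEstimate {
            public static void main(String[] args) {
                double dailyUsers = 50_000;            // from the question
                double messagesPerUserPerDay = 20;     // assumption
                double peakToAverageRatio = 10;        // assumption: traffic bunches up

                double writesPerDay = dailyUsers * messagesPerUserPerDay;        // 1,000,000
                double avgWritesPerSec = writesPerDay / 86_400;                  // ~11.6
                double peakWritesPerSec = avgWritesPerSec * peakToAverageRatio;  // ~116

                System.out.printf("average %.1f writes/s, provision for roughly %.0f writes/s%n",
                        avgWritesPerSec, peakWritesPerSec);
            }
        }

    Under those assumed numbers, a provisioned throughput of 50 writes/second would cover the average comfortably but not the assumed peak.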

    Read the article

  • Redmine & Git integration

    - by archnemesis
    I am considering moving from svn and Trac to git and Redmine. I'm just wondering what everyone's experience of this has been. How well does git integrate with Redmine? I'm pretty set on my decision to change from svn to git - our distributed work and the need to branch and merge frequently would make life considerably easier with git. But we would possibly need to split things into multiple projects for this. From what I have been reading, git and multiple projects don't integrate too smoothly with Trac. That aside, in my investigations into git, Redmine has also caught my attention, and some of the features look very useful. However, I haven't found as many user experiences of git and Redmine as I'd like (possibly due to my lack of searching skills...) and so would like to hear your opinions and examples.

    Read the article

  • memcached cluster maintenance

    - by Yang
    Scaling up memcached to a cluster of shards/partitions requires either distributed routing/partition table maintenance or centralized proxying (and other stuff like detecting failures). What are the popular/typical approaches/systems here? There's software like libketama, which provides consistent hashing, but this is just a client-side library that reacts to messages about node arrivals/departures. Do most users just run something like this, plus separate monitoring nodes that, on detecting failures, notify all the libketamas of the departure? I imagine something like this might be sufficient since typical use of memcached as a soft-state cache doesn't require careful attention to consistency, but I'm curious what people do.
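
    To make the client-side routing idea concrete, below is a minimal consistent-hash ring with virtual nodes; it illustrates the same general mechanism libketama provides, but it is not ketama's exact algorithm (the MD5-derived hash and the 100 virtual nodes per server are arbitrary choices here).

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.SortedMap;
        import java.util.TreeMap;

        public class HashRing {
            private final TreeMap<Long, String> ring = new TreeMap<>();
            private static final int VNODES = 100; // virtual nodes per server smooth the distribution

            public void addNode(String node) {
                for (int i = 0; i < VNODES; i++) ring.put(hash(node + "#" + i), node);
            }

            public void removeNode(String node) {
                for (int i = 0; i < VNODES; i++) ring.remove(hash(node + "#" + i));
            }

            // A key maps to the first node clockwise from its hash; only keys owned
            // by a departed node move when membership changes.
            public String nodeFor(String key) {
                SortedMap<Long, String> tail = ring.tailMap(hash(key));
                return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
            }

            private static long hash(String s) {
                try {
                    byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
                    return ((long) (d[3] & 0xFF) << 24) | ((d[2] & 0xFF) << 16)
                         | ((d[1] & 0xFF) << 8) | (d[0] & 0xFF);
                } catch (Exception e) {
                    throw new IllegalStateException(e);
                }
            }
        }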

    Read the article

  • Download and replace Android resource files

    - by Casebash
    My application will have some customisation for each company that uses it. Up until now, I have been loading images and strings from resource files. The idea is that the default resources will be distributed with the application and company specific resources will be loaded from our server after they click on a link from an email to launch the initialisation intent. Does anyone know how to replace resource files? I would really like to keep using resource files to avoid rewriting a lot of code/XML. I would distribute the application from our own server, rather than through the app store, so that we could have one version per company, but unfortunately this will give quite nasty security warnings that would concern our customers.

    Read the article

  • Giving proper credit to a project's contributors

    - by Greg B
    I've recently been working with an open-source library for a commercial product. The open-source code is distributed from the website of the company that sells the proprietary product, as a zip file. The library is a (direct) port to C# of the original library, which is in Java. As such, it uses methods instead of getter/setter properties. The code contains copyright notices from the supplier of the product. The C# port was originally provided to the company by a third-party individual. I have modified the source to be more C#-like and added a couple of small features. I want to put my version of the code out there (Google Code or wherever) so that C# users of the software can benefit from a more native-feeling library. How can I, and how should I, amend the copyright notice to give proper credit to: (1) the commercial owner of the original source, (2) the guy who provided the original C# port, and (3) myself and anyone else who contributes to the project in the future? The source is provided under the LGPL v2.1.

    Read the article

  • Microphone input

    - by George
    I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that tells how the shots are distributed in time, and I use an HTC Tattoo for testing. I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected; speech gives me values from getMaxAmplitude in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with considerably higher levels. Does anyone know how these things work? Are there filters that are applied before registering the max amplitude? If so, is it hardware or software? Thanks, /George

    Read the article

  • What files should be included under VSS 6.0?

    - by kheat
    For our .net 3.5 web project, which files need to be included under VSS 6.0? We have a distributed team of three vendors working on separate modules of our .net portal, and all of them maintain their own setup; during release they send across the final build. No surprise that this has caused many headaches, and we have decided that we will keep this environment under our control and check out the files when required. This is a multi-part question, and to clear up some basics first, we would like to know which files are important to keep under VSS 6.0. Yes, we know VSS 6.0 is outdated, but we are playing a catch-up game, and until we move to either TFS or Subversion (at least six months down the line) we need a VSS strategy. TIA

    Read the article

  • How do I choose a database?

    - by liamzebedee
    I need a comparison table of some sort for database varieties (MySQL, SQLite, etc.). I can't find one. My use case is that I am implementing storage of objects in a distributed hash table. I need a database solution that is:
    - Fast for sorting
    - Simplistic (no users, preferably no additional structures like multiple tables, etc.)
    - Concurrent (if possible)
    - Multi-platform
    - File based (not stored in memory primarily)
    - Centralized
    I will be programming in Go. As I understand it, I believe I need what is called a document-oriented database, because I am storing objects identified by keys. Edit: While I am implementing a DHT, I will also be storing metadata about the objects, such as access counts. It would also be preferable to have TTL (time to live).

    Read the article

  • Including external C++ libraries in version control

    - by m0tive
    I'm currently starting a project which is going to be developed on a few different computers, and I'm keeping it in sync with bzr. In the project I'm using a couple of 3rd-party libraries, like SDL. In the past I've just pushed a copy of the compiled library to my version control, but that usually seems to massively inflate the size of the branch and generally seems like a bad idea. Is that the normal practice, just pushing the required libraries, or is there a better way of adding libraries to distributed version control like bzr or git? (I know on svn you can use svn:externals to do something similar to this.)

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - we are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics, we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume, except for system upgrades (once every few months) where we take the system offline but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports, etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the modifications the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory Private Bytes) doesn't add up. I do have Show Processes from all users checked, and hand-adding the numbers I come up with about 3.5GB of RAM. I also looked at the latest copy of SysInternals Process Explorer, and neither the Private Bytes nor the Working Set adds up to more than about 3.5GB of RAM in use. What's going on?
    Update: I bounced the server to see what would happen with the memory utilization. After boot and regular operations began, it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB of usage with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screenshots of the resource monitor and task manager: Resource Monitor Task Manager
    Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically. The Memory Private Bytes count barely moved. So I'm guessing that SQL Server has some way of allocating memory where it doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)

    Read the article

  • As a programmer what single discovery has given you the greatest boost in productivity?

    - by ChrisInCambo
    This question has been inspired by my recent discovery/adoption of distributed version control. I started using it (Mercurial) just because I liked the idea of still being able to make commits at times when I couldn't connect to the central server. I never expected it would give me a large boost in general productivity, but a pleasant side effect I discovered was that making a new clone every time I started a new task, and giving that clone a descriptive folder name, is extremely effective at keeping me on task, resulting in a noticeable productivity increase. So, as a programmer, what single discovery has given you the greatest boost in productivity? Extra respect for answers which involve tools or practices that aren't so obvious from the outside!

    Read the article

  • How do I protect Python code?

    - by Jordfräs
    I am developing a piece of software in Python that will be distributed to my employer's customers. My employer wants to limit the usage of the software with a time-restricted license file. If we distribute the .py files, or even .pyc files, it will be easy to decompile them and remove the code that checks the license file. Another aspect is that my employer does not want the code to be read by our customers, fearing that the code, or at least the "novel ideas", may be stolen. Is there a good way to handle this problem? Preferably with an off-the-shelf solution. The software will run on Linux systems (so I don't think py2exe will do the trick).

    Read the article

  • How to minimize the amount of space used by the GPL copyright notice?

    - by Lukasz Lew
    The GNU GPL page advocates the following header in each file of a GPL project: "This file is part of Foobar. Foobar is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Foobar is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Foobar. If not, see http://www.gnu.org/licenses/." I find this overkill. Can't it be shorter and somehow refer to the COPYING or LICENCE file?

    Read the article

  • hg unshelve not working

    - by shanebonham
    Our team is just getting started with Mercurial. One of the first things we've started to play with is hg shelve. Locally, I have no problem shelving changes; it all works perfectly from what I can tell. However, when I try to unshelve, I get the "restoring backup files" message, but when I run hg diff there are no changes, and my changes are missing from the code. If I do hg unshelve -i I can see the diff, but again, trying to unshelve seems to have no effect. I've been trying to test it with some very simple changes that shouldn't be a problem in terms of conflicts, e.g. adding a test comment. I should note that I've tried hg unshelve -f, after which it says "unshelve completed", but again my changes are not restored. Any ideas what I am doing wrong? If it matters: Mercurial Distributed SCM (version 1.5.1+20100405)

    Read the article

  • Random numbers from binomial distribution

    - by Sarah
    I need to quickly generate lots of random numbers from binomial distributions for dramatically different trial sizes (most, however, will be small). I was hoping not to have to code an algorithm by hand (see, e.g., this related discussion from November), because I'm a novice programmer and don't like reinventing wheels. It appears Boost does not supply a generator for binomially distributed variates, but TR1 and GSL do. Is there a good reason to choose one over the other, or is it better that I write something customized to my situation? I don't know if this makes sense, but I'll alternate between generating numbers from uniform distributions and binomial distributions throughout the program, and I'd like for them to share the same seed and to minimize overhead. I'd love some advice or examples of what I should be considering.

    Read the article

  • Global variables in Hadoop

    - by Deepak Konidena
    Hi, my program follows an iterative map/reduce approach, and it needs to stop if certain conditions are met. Is there any way I can set a global variable that can be distributed across all map/reduce tasks and check whether the global variable reaches the condition for completion? Something like this:

        while (condition != true) {
            Configuration conf = getConf();
            Job job = new Job(conf, "Dijkstra Graph Search");
            job.setJarByClass(GraphSearch.class);
            job.setMapperClass(DijkstraMap.class);
            job.setReducerClass(DijkstraReduce.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(Text.class);
        }

    Where condition is a global variable that is modified during/after each map/reduce execution.
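
    One common way to feed that kind of stop signal back to the driver is a Hadoop Counter: tasks increment it, and the driver reads it after waitForCompletion and decides whether to launch another iteration. A rough sketch under that assumption follows (the enum, class names, and loop condition are made up).

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.mapreduce.Job;

        public class IterativeDriver {
            // Incremented from tasks, e.g. in a mapper or reducer:
            //   context.getCounter(SearchState.UNSETTLED_NODES).increment(1);
            public enum SearchState { UNSETTLED_NODES }

            public static void main(String[] args) throws Exception {
                long unsettled = Long.MAX_VALUE;
                while (unsettled > 0) {
                    Configuration conf = new Configuration();
                    Job job = new Job(conf, "Dijkstra Graph Search");
                    job.setJarByClass(IterativeDriver.class);
                    // set mapper/reducer/key/value classes as in the loop above ...
                    job.waitForCompletion(true);

                    // Read the aggregated counter in the driver once the job finishes.
                    unsettled = job.getCounters()
                                   .findCounter(SearchState.UNSETTLED_NODES).getValue();
                }
            }
        }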

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (and needs to scale to 500 million+ 1kB data chunks). What would be the best method of storing this dataset (it needs to allow adding more items and reading them rapidly, but never modifying already-added data)? Would using a MySQL DB with the binary blob format be appropriate? Or should each of these be stored as files on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.

    Read the article

  • Java, merging two arrays evenly

    - by user2435044
    What would be the best way to merge two arrays of different lengths together so they are evenly distributed in the new array? Say I have the following arrays:

        String[] array1 = new String[7];
        String[] array2 = new String[2];
        String[] mergedArray = new String[array1.length + array2.length];

    I would want mergedArray to have the following elements:

        array1 array1 array1 array2 array1 array1 array1 array2 array1

    but if I were to change the sizes of the arrays to:

        String[] array1 = new String[5];
        String[] array2 = new String[3];
        String[] mergedArray = new String[array1.length + array2.length];

    then I would want it to be:

        array1 array2 array1 array2 array1 array2 array1 array1

    Basically, if it can be helped, array2 elements shouldn't touch each other; the exception is if array2 is larger than array1.
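
    One way to pin down "evenly" is to spread the shorter array through the longer one with a Bresenham-style error accumulator, which keeps the gaps as uniform as possible; the sketch below does that. The exact positions it produces can differ slightly from the examples above, since "even" admits more than one definition.

        import java.util.Arrays;

        public class EvenMerge {
            // Distributes the shorter array's elements through the longer one by
            // adding shorter.length per slot and emitting a "shorter" element every
            // time the accumulator crosses the total length.
            static String[] merge(String[] longer, String[] shorter) {
                int total = longer.length + shorter.length;
                String[] merged = new String[total];
                int li = 0, si = 0, acc = 0;
                for (int i = 0; i < total; i++) {
                    acc += shorter.length;
                    if (acc >= total && si < shorter.length) {
                        merged[i] = shorter[si++];
                        acc -= total;
                    } else {
                        merged[i] = li < longer.length ? longer[li++] : shorter[si++];
                    }
                }
                return merged;
            }

            public static void main(String[] args) {
                String[] a = new String[7]; Arrays.fill(a, "array1");
                String[] b = new String[2]; Arrays.fill(b, "array2");
                // Prints: array1 array1 array1 array1 array2 array1 array1 array1 array2
                System.out.println(String.join(" ", merge(a, b)));
            }
        }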

    Read the article
