Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Modify Executing Jar file

    - by pinkynobrain
    Hello Stack Overflow friends. I have a simple problem which I fear doesn't have a simple solution, and I need advice on how to proceed. I am developing a Java application packaged as an executable JAR, but it needs to modify some of its own JAR file contents during execution. At this stage I hit a problem, because some operating systems lock the file, preventing writes to it. It is essential that the user sees an updated version of the JAR file by the time the application exits, although I can be pretty flexible about how to achieve this. A clean and efficient solution is obviously preferable, but portability is the only hard requirement. The following are three approaches I can see to solving the problem; feel free to comment on them or suggest others.

    1. Tell Java to unlock the JAR file for writing. (This doesn't seem possible, but it would be the easiest solution.)
    2. Copy the executable class files to a temporary file on application startup, then use a class loader to load those files and unload the ones from the initial JAR. (I haven't had much experience with class loaders, but hopefully the JVM would then be smart enough to realize that the original JAR is no longer in use, and unlock it.)
    3. Put a second executable JAR file inside the first. On startup, extract the inner JAR to a temporary file, invoke a new Java process, and pass it the location of the outer JAR; the first process exits, and the second process modifies the outer JAR unencumbered. (This will work, but I'm not sure there is a platform-independent way for one Java app to invoke another.)

    I know this is a weird question, but any help would be appreciated.
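
    On the portability worry in approach 3: a minimal sketch of the relaunch step that avoids relying on PATH by locating the java binary through the java.home system property. The innerJar/outerJar names are illustrative, and the inner JAR is assumed to be extracted already.

        import java.io.File;

        public class Relauncher {
            static void relaunch(File innerJar, File outerJar) throws Exception {
                // java.home points at the JRE running this process, so no PATH lookup.
                String javaBin = new File(new File(System.getProperty("java.home"), "bin"),
                                          "java").getPath();
                new ProcessBuilder(javaBin, "-jar",
                        innerJar.getAbsolutePath(),
                        outerJar.getAbsolutePath()).start();
                System.exit(0); // the helper can now modify the unlocked outer JAR
            }
        }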

    Read the article

  • Comparing two collections for equality

    - by Crossbrowser
    I would like to compare two collections (in C#), but I'm not sure of the best way to implement this efficiently. I've read the other thread about Enumerable.SequenceEqual, but it's not exactly what I'm looking for. In my case, two collections would be equal if they both contain the same items (no matter the order). Example:

        collection1 = {1, 2, 3, 4};
        collection2 = {2, 4, 1, 3};
        collection1 == collection2; // true

    What I usually do is loop through each item of one collection and see if it exists in the other collection, then loop through each item of the other collection and see if it exists in the first collection. (I start by comparing the lengths.)

        if (collection1.Count != collection2.Count)
            return false; // the collections are not equal

        foreach (Item item in collection1)
        {
            if (!collection2.Contains(item))
                return false; // the collections are not equal
        }

        foreach (Item item in collection2)
        {
            if (!collection1.Contains(item))
                return false; // the collections are not equal
        }

        return true; // the collections are equal

    However, this is not entirely correct, and it's probably not the most efficient way to compare two collections for equality. An example I can think of that would be wrong is:

        collection1 = {1, 2, 3, 3, 4}
        collection2 = {1, 2, 2, 3, 4}

    These would be equal with my implementation. Should I just count the number of times each item is found and make sure the counts are equal in both collections? The examples are in some sort of C# (let's call it pseudo-C#), but give your answer in whatever language you wish, it does not matter.

    Note: I used integers in the examples for simplicity, but I want to be able to use reference-type objects too (they do not behave correctly as keys, because only the reference of the object is compared, not the content).
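
    The counting idea, as a minimal sketch (one possible answer, not necessarily the best): tally one collection into a dictionary, then decrement while scanning the other, which is O(n) on average rather than the O(n^2) of repeated Contains calls. For the reference-type concern, it relies on a proper Equals/GetHashCode pair or a custom IEqualityComparer.

        // Multiset equality via element counts; pass null for the default comparer.
        static bool MultisetEqual<T>(ICollection<T> a, ICollection<T> b,
                                     IEqualityComparer<T> comparer)
        {
            if (a.Count != b.Count)
                return false;
            var counts = new Dictionary<T, int>(comparer);
            foreach (T item in a)
            {
                int n;
                counts.TryGetValue(item, out n);
                counts[item] = n + 1;      // tally the first collection
            }
            foreach (T item in b)
            {
                int n;
                if (!counts.TryGetValue(item, out n) || n == 0)
                    return false;          // missing, or over-represented in b
                counts[item] = n - 1;
            }
            return true;
        }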

    Read the article

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model, and I'm trying to understand why that model was chosen over a peer-to-peer (P2P) model. To me, P2P would be preferable, because going directly between peers would be more efficient (fewer IP hops, and no processing along the way by a server computer), and there would be no need for a server with its costs and dependencies. I found that some products offer the ability to switch from P2P to C/S if the P2P connection proves insufficient.

    As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation; the server (presumably with more bandwidth) could then do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, and the server would use its superior bandwidth to relay the message to the multiple other peers. But in a situation with one-on-one voice chatting, it seems that P2P would be best: a server would not reduce either peer's bandwidth requirements and would only add unnecessary overhead, dependency and cost.

    In one-on-one voice chatting:
    1. Am I mistaken on anything above?
    2. Would peer-to-peer be best?
    3. Would a server provide anything of value that could not be provided by a client-only program?
    4. Is there anything else that I should be taking into consideration?

    And lastly, can you recommend any Silverlight P2P or C/S voice chat products? Thanks in advance for any info.

    Read the article

  • Updating a Minimum spanning tree when a new edge is inserted

    - by Lynette
    Hello, I've been presented the following problem at university:

    Let G = (V, E) be an (undirected) graph with costs c_e ≥ 0 on the edges e ∈ E. Assume you are given a minimum-cost spanning tree T in G. Now assume that a new edge is added to G, connecting two nodes v, w ∈ V with cost c.

    a) Give an efficient algorithm to test if T remains the minimum-cost spanning tree with the new edge added to G (but not to the tree T). Make your algorithm run in time O(|E|). Can you do it in O(|V|) time? Please note any assumptions you make about what data structure is used to represent the tree T and the graph G.

    b) Suppose T is no longer the minimum-cost spanning tree. Give a linear-time algorithm (time O(|E|)) to update the tree T to the new minimum-cost spanning tree.

    This is the solution I found:

    1. Let e1 = (a, b) be the new edge added.
    2. Find in T the path from a to b (BFS).
    3. If e1 is the most expensive edge in the resulting cycle, then T remains the MST; otherwise, T is not the MST.

    It seems to work, but I can easily make this run in O(|V|) time, while the problem asks for O(|E|) time. Am I missing something? By the way, we are authorized to ask for help from anyone, so I'm not cheating :D Thanks in advance
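
    For what it's worth, a sketch of that check in Java, assuming T is stored as an adjacency list (the kind of representation assumption the exercise asks you to state). Since a tree has |V| - 1 edges, the BFS alone is O(|V|); the node/cost encoding here is illustrative.

        import java.util.*;

        class MstCheck {
            // Is T still optimal after adding edge (a, b) with cost c?
            // tree is an adjacency list: node -> list of {neighbor, cost}.
            static boolean treeStillOptimal(Map<Integer, List<int[]>> tree,
                                            int a, int b, int c) {
                Map<Integer, Integer> parent = new HashMap<>(); // child -> parent
                Map<Integer, Integer> costTo = new HashMap<>(); // cost of edge to parent
                Deque<Integer> queue = new ArrayDeque<>();
                queue.add(a);
                parent.put(a, a);
                while (!queue.isEmpty()) {          // BFS over the tree: O(|V|)
                    int u = queue.poll();
                    if (u == b) break;
                    for (int[] e : tree.get(u)) {   // e = {neighbor, cost}
                        if (!parent.containsKey(e[0])) {
                            parent.put(e[0], u);
                            costTo.put(e[0], e[1]);
                            queue.add(e[0]);
                        }
                    }
                }
                int maxOnPath = Integer.MIN_VALUE;  // walk b back up to a
                for (int v = b; v != a; v = parent.get(v))
                    maxOnPath = Math.max(maxOnPath, costTo.get(v));
                return c >= maxOnPath;              // optimal iff new edge is no cheaper
            }
        }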

    Read the article

  • Is it possible to run my Windows Forms application on the Windows CE platform?

    - by Fakhrul
    I am new to Windows CE development and have never done it before, so I need some advice from the experts here. In our current project, we are developing a client-server application. The client side is a Windows Forms application that runs on Windows XP, while the server is a web-based application. This question relates to the client application (Windows Forms). The application uses SQL Server Express Edition for data storage, with the data stored in XML object format. It can also transfer data from client to server via a web service, and it interacts with hardware such as a magnetic stripe reader, a contactless smart card reader, and a thermal printer. Most of the communication between the hardware devices and the system is based on serial ports. It uses a standard app.config for configuration, and it is a multi-threaded application.

    There is a new requirement to use a handheld device running the Windows CE platform. This handheld includes the required equipment, such as a contactless smart card reader, a printer and a magnetic stripe reader. Instead of developing a new client application, is it possible for me to convert my current application from Windows XP to Windows CE? If yes, how can I do that? If no, is there any other brilliant suggestion for how to do this? Thanks in advance. Software Engineer

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.

    Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.

    I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separately mapped writable page, while keeping the structs in read-only mapped pages. This will work; the OS will force my application to crash if I try to write to the pointed-to data. But indirect storage can be undesirable when writing lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.

    Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct, because they're limited to 4 bytes. It'd also make my program painful to debug ;)
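
    Protection is per-page, so one way to keep the reference counts addressable without a pointer chase is a parallel array: payloads in pages that clients remap read-only, refcounts in a separate writable region found by the same index. A minimal POSIX sketch; a real version would use named shared memory such as shm_open rather than MAP_ANONYMOUS, which only shares with forked children.

        /* Payload structs live on read-only pages; refcounts live in a
         * parallel writable array, addressed by index instead of pointer. */
        #include <stdio.h>
        #include <sys/mman.h>

        #define NSTRUCTS 1024

        typedef struct { int payload[15]; } Record;  /* read-only to clients */

        int main(void) {
            Record *records = mmap(NULL, NSTRUCTS * sizeof(Record),
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            int *refcounts = mmap(NULL, NSTRUCTS * sizeof(int),
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);

            /* A client drops write permission on the payload pages only: */
            mprotect(records, NSTRUCTS * sizeof(Record), PROT_READ);

            refcounts[7]++;                   /* fine: still writable      */
            /* records[7].payload[0] = 1; */  /* would die with SIGSEGV    */
            printf("refcount[7] = %d\n", refcounts[7]);
            return 0;
        }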

    Read the article

  • What objects would get defined in a Bridge scoring app (Javascript)

    - by Alex Mcp
    I'm writing a Bridge (card game) scoring application as practice in JavaScript, and am looking for suggestions on how to set up my objects. I'm pretty new to OO in general, and would love a survey of how and why people would structure a program to do this (the "survey" begets the CW mark; additionally, I'll happily close this if it's out of range of typical SO discussion). The platform is going to be a web app for WebKit on the iPhone, so local storage is an option. My basic structure is like this:

        var Team = {
            player1: ,
            player2: ,
            vulnerable: ,     // flag for whether or not you've lost
                              // a game yet; affects scoring
            scoreAboveLine: ,
            scoreBelowLine: ,
            gamesWon:
        };

        var Game = {
            init: ,           // function to get old scores and teams from DB
            currentBid: ,
            score: ,          // function to accept whether bid was made
                              // and apply to Teams{}
            save:             // auto-run, called after each game to commit to DB
        };

    So basically I'll instantiate two teams, and then run loops of game.currentBid() and game.score(). Functionally the scoring is working fine, but I'm wondering if this is how someone else would choose to break down the scoring of Bridge, and if there are any oversights I've made with regard to OO-ness and how I've abstracted the game. Thanks!

    Read the article

  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "unformatted sequential" format, as described here (about halfway down the page, in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm

    As you can see from that page, the file is organized into "chunks" of 130 bytes or less, and includes two length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That will be the next exercise.

    My first thought is to slurp up the entire file into a byte array using File.ReadAllBytes, then iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files can be fairly large (say 30 MB), though most will be much smaller...
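
    For comparison, a stream-based sketch that never holds more than one chunk in memory, assuming one length byte before and one after each chunk (as the linked page describes; verify the marker width against a real file, since other compilers use four-byte record markers):

        // Strip the compiler's record framing, keeping only the payload.
        static byte[] ReadPayload(string path)
        {
            using (FileStream input = File.OpenRead(path))
            using (MemoryStream payload = new MemoryStream())
            {
                int length;
                while ((length = input.ReadByte()) != -1)
                {
                    byte[] chunk = new byte[length];
                    input.Read(chunk, 0, length);   // the actual data
                    payload.Write(chunk, 0, length);
                    input.ReadByte();               // skip the trailing length byte
                }
                return payload.ToArray();
            }
        }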

    Read the article

  • rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data). I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently writing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better, and I'd benefit from knowing how to use them. So in short, the basic requirements:

    * database-based data storage (SQLite if possible)
    * very automated GUI creation
    * desktop-based (as in: not a web app)
    * extendable by coding
    * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.

    Read the article

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize: when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code:

        BigDecimal myBD = new BigDecimal("" + someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    In more than 99% of cases everything works fine, but in a few rare cases the bug mentioned above occurs, and that's quite annoying. If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases:

        BigDecimal myBD = new BigDecimal(someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue:

    1. Do not use the new BigDecimal(String) constructor, and use new BigDecimal(double) directly?
    2. Force Oracle to use toPlainString() instead of toString() when dealing with BigDecimal (and in this case, how to do that)?
    3. Any other solution?

    Environment information:
    * Java 1.6.0_14
    * Hibernate 2.1.8 (yes, it is a quite old version)
    * Oracle JDBC 9.0.2.0, also tested with 10.2.0.3.0
    * Oracle database 10.2.0.3.0
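
    To make the Java 1.4 to 5 change concrete: toString() on a BigDecimal may now emit scientific notation, which an old driver can misparse, while toPlainString() never does. A small sketch; how your driver builds the bind value is the part to verify.

        import java.math.BigDecimal;

        public class BigDecimalDemo {
            public static void main(String[] args) {
                BigDecimal bd = new BigDecimal("7.65E+7");
                System.out.println(bd.toString());       // "7.65E+7" on Java 5+
                System.out.println(bd.toPlainString());  // "76500000"
                // BigDecimal.valueOf(double) is the usual replacement for
                // new BigDecimal("" + someDoubleValue):
                System.out.println(BigDecimal.valueOf(76500000.0).toPlainString());
            }
        }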

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku.

    What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary.

    Can anybody tell me if this would work?

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like GitHub, for backup and cloning of all my work.

    Footnote: This question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
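
    As a side note, the wildcard refspec above maps every local branch onto Heroku's master, which can push branches you didn't intend. One hedged alternative to experiment with (on a throwaway app first, as suggested above): a HEAD-based refspec, which should resolve to whatever branch is currently checked out.

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            # Push only the branch that is checked out right now:
            push = +HEAD:refs/heads/master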

    Read the article

  • Bit shift and pointer oddities in C, looking for explanations

    - by foo
    Hi all, I discovered something odd that I can't explain. If someone here can see what is happening, or why, I'd like to know. What I'm doing is taking an unsigned short containing 12 bits aligned high, like this:

        1111 1111 1111 0000

    I then want to shift the bits so that each byte in the short holds 7 bits, with the MSB as a pad. The result of what's presented above should look like this:

        0111 1111 0111 1100

    What I have done is this:

        unsigned short buf = 0xfff; // align high
        buf <<= 4;
        buf >>= 1;
        *((char*)&buf) >>= 1;

    This gives me something that looks like it's correct, but the result of the last shift leaves the high bit of the low byte set, like this:

        0111 1111 1111 1100

    Very odd. If I use an unsigned char as temporary storage and shift that, then it works, like this:

        unsigned short buf = 0xfff;
        buf <<= 4;
        buf >>= 1;
        unsigned char tmp = *((char*)&buf);
        *((char*)&buf) = tmp >> 1;

    The result of this is:

        0111 1111 0111 1100

    Any ideas what is going on here?
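
    A sketch of the likely explanation, assuming a little-endian machine (so the char pointer aliases the low byte): plain char is signed on this platform, and right-shifting a negative signed value sign-extends, so the low byte 0xf8 (-8 as a signed char) shifts to 0xfc instead of 0x7c. Casting through unsigned char forces a logical shift:

        #include <stdio.h>

        int main(void) {
            unsigned short buf = 0xfff;
            buf <<= 4;                      /* 1111 1111 1111 0000 */
            buf >>= 1;                      /* 0111 1111 1111 1000 */
            *((unsigned char*)&buf) >>= 1;  /* low byte: 0xf8 -> 0x7c */
            printf("%04x\n", buf);          /* prints 7f7c            */
            return 0;
        }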

    Read the article

  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC 4226, "HOTP: An HMAC-Based One-Time Password Algorithm", in SQL. I think I've got a version that works (in that, for a small test sample, it produces the same result as the Java version in the RFC), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL; otherwise I'm happy to look at other ways of doing this. What I've got so far:

        -- From the inside out...
        -- Concatenate the user's secret and the number of times it's been used
        -- Find the SHA1 hash of that string
        -- Turn the 40-byte hex encoding into a 20-byte binary string
        -- Keep the first 4 bytes
        -- Turn those back into a hex representation
        -- Convert that into an integer
        -- Throw away the most significant bit (solves signed/unsigned problems)
        -- Truncate to 6 digits
        -- Store into otp
        -- from the otpsecrets table
        select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10)
                & 0x7fffffff) % 1000000
        into otp
        from otpsecrets;

    Is there a better (more efficient) way of doing this?
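
    One likely simplification, as a hedged sketch: MySQL's SHA1() already returns the digest as a 40-character hex string, so the unhex/substr/hex round-trip can probably be replaced by taking the first 8 hex characters directly. (Note that full RFC 4226 dynamic truncation selects its 4 bytes at an offset taken from the last nibble of the hash, not always the first 4, so check this against the Java reference implementation.)

        -- First 4 bytes = first 8 hex characters of the digest
        select (conv(substr(sha1(concat(secret, uses)), 1, 8), 16, 10)
                & 0x7fffffff) % 1000000
        into otp
        from otpsecrets;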

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10,000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) is spent in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch, so maybe there's a better way to do it.

    Now, I would just conclude the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right-clicked it, picked Export, and saved it as XML. Then I right-clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                    adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way, but I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?
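
    One thing that might be worth benchmarking against: a plain DAO recordset, which talks to JET natively and often beats ADO for bulk inserts in Access. A sketch under that assumption (same testy table; timings untested):

        Sub TestDaoInsert()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As DAO.Recordset
            Set rs = CurrentDb.OpenRecordset("testy", dbOpenTable)

            Dim i As Long
            For i = 1 To 10000
                rs.AddNew         ' one row per AddNew/Update pair
                rs!x = 50
                rs!y = 55
                rs.Update
            Next i

            rs.Close
            CurrentDb.Execute "drop table testy"
        End Sub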

    Read the article

  • searching XML documents using php

    - by dbomb101
    I am trying to make a search function using the combination of DOM, PHP and XML. I have something up and running, but the problem is that my search function will only accept exact terms. On top of this, I am wondering whether the method I picked is the most efficient.

        $searchTerm = "Lupe";
        $doc = new DOMDocument();
        foreach (file('musicInformation.xml') as $node) {
            $xmlString .= trim($node);
        }
        $doc->loadXML($xmlString);

        $records = $doc->getElementsByTagName("musicdetails");
        foreach ($records as $record) {
            $artistnames = $record->getElementsByTagName("artistname");
            $artistname = $artistnames->item(0)->nodeValue;
            $recordnames = $record->getElementsByTagName("recordname");
            $recordname = $recordnames->item(0)->nodeValue;
            $recordtypes = $record->getElementsByTagName("recrodtype");
            $recordtype = $recordtypes->item(0)->nodeValue;
            $formats = $record->getElementsByTagName("format");
            $format = $formats->item(0)->nodeValue;
            $prices = $record->getElementsByTagName("price");
            $price = $prices->item(0)->nodeValue;

            if ($searchTerm == $artistname || $searchTerm == $recordname ||
                $searchTerm == $recordtype || $searchTerm == $format ||
                $searchTerm == $price) {
                echo "$artistname - $recordname - $recordtype - $format - $price\n";
            }
        }
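
    For partial and case-insensitive matches, one option is to let DOMXPath do the scanning instead of comparing each field by hand. A sketch (XPath 1.0 has no lower-case(), hence the translate() trick; element names are taken from the code above, and the search term should be escaped if it can contain quotes):

        $searchTerm = strtolower("lupe");
        $doc = new DOMDocument();
        $doc->load('musicInformation.xml');  // replaces the manual file()/trim loop
        $xpath = new DOMXPath($doc);

        // Match any musicdetails element with a child whose text contains the term
        $lower = "translate(., 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz')";
        $query = "//musicdetails[*[contains($lower, '$searchTerm')]]";

        foreach ($xpath->query($query) as $record) {
            echo $doc->saveXML($record), "\n";
        }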

    Read the article

  • How can I inject multiple repositories in a NServicebus message handler?

    - by Paco
    I use the following:

        public interface IRepository<T>
        {
            void Add(T entity);
        }

        public class Repository<T> : IRepository<T>
        {
            private readonly ISession session;

            public Repository(ISession session)
            {
                this.session = session;
            }

            public void Add(T entity)
            {
                session.Save(entity);
            }
        }

        public class SomeHandler : IHandleMessages<SomeMessage>
        {
            private readonly IRepository<EntityA> aRepository;
            private readonly IRepository<EntityB> bRepository;

            public SomeHandler(IRepository<EntityA> aRepository,
                               IRepository<EntityB> bRepository)
            {
                this.aRepository = aRepository;
                this.bRepository = bRepository;
            }

            public void Handle(SomeMessage message)
            {
                aRepository.Add(new EntityA(message.Property));
                bRepository.Add(new EntityB(message.Property));
            }
        }

        public class MessageEndPoint : IConfigureThisEndpoint, AsA_Server,
                                       IWantCustomInitialization
        {
            public void Init()
            {
                ObjectFactory.Configure(config =>
                {
                    config.For<ISession>()
                          .CacheBy(InstanceScope.ThreadLocal)
                          .TheDefault.Is.ConstructedBy(
                              ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
                    config.ForRequestedType(typeof(IRepository<>))
                          .TheDefaultIsConcreteType(typeof(Repository<>));
                });
            }
        }

    My problem with the thread-local storage is that the same session is used for the whole lifetime of the thread; I discovered this when I saw that the first-level cache wasn't being cleared. What I want is to use a new session instance before each call to IHandleMessages<T>.Handle. How can I do this with StructureMap? Do I have to create a message module?
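
    On the closing question: a message module is indeed the usual hook for per-message work. A rough sketch against the NServiceBus 2.x-era IMessageModule interface; whether StructureMap's EjectAllInstancesOf also disposes the thread-scoped session depends on your StructureMap version, so treat this as a starting point rather than a drop-in:

        public class SessionMessageModule : IMessageModule
        {
            public void HandleBeginMessage() { }

            public void HandleEndMessage()
            {
                // Evict the thread's cached ISession so the next message
                // resolves a fresh one via the ConstructedBy lambda above.
                ObjectFactory.EjectAllInstancesOf<ISession>();
            }

            public void HandleError()
            {
                HandleEndMessage();
            }
        }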

    Read the article

  • Atomic Instructions and Variable Update visibility

    - by dsimcha
    On most common platforms (the most important being x86; I understand that some platforms have extremely difficult memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?

    Thread 1:

        someVariable = doStuff();
        atomicSet(stuffDoneFlag, 1);

    Thread 2:

        while (!atomicRead(stuffDoneFlag)) {}  // Wait for stuffDoneFlag to be set.
        doMoreStuff(someVariable);

    Assuming standard, reasonable implementations of atomic ops: Is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called? Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff(), provided it reads stuffDoneFlag atomically?

    Edits:
    1. The implementation of atomic ops I'm using contains the x86 LOCK instruction in each operation, if that helps.
    2. Assume stuffDoneFlag is properly cleared somehow. How isn't important.
    3. This is a very simplified example. I created it this way so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.

    Read the article

  • Getting an Android App to Show Up in the market for "Sony Internet TV" (Google TV)

    - by user1291659
    I'm having a bit of trouble getting my app to show up in the Market for Google TV. I've searched Google's official documentation, and I don't believe the manifest lists any elements that would invalidate the program: the only hardware requirements specified are landscape mode, wake lock and external storage (none of which should cause it to be filtered for Google TV, according to the documentation), and I set the uses-feature touchscreen element's "required" attribute to false. Below is the AndroidManifest.xml for my project:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.whateversoft"
            android:versionCode="2"
            android:versionName="0.1" >

            <uses-sdk android:minSdkVersion="8" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="Color Shafted"
                android:theme="@style/Theme.NoBackground"
                android:debuggable="false">

                <activity
                    android:label="Color Shafted"
                    android:name=".colorshafted.ColorShafted"
                    android:configChanges="keyboard|keyboardHidden|orientation"
                    android:screenOrientation="landscape">
                    <!-- Set as the default run activity -->
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>

                <activity
                    android:label="Color Shafted Settings"
                    android:name=".colorshafted.Settings"
                    android:theme="@android:style/Theme"
                    android:configChanges="keyboard|keyboardHidden">
                </activity>
            </application>

            <!-- DEFINE PERMISSIONS FOR CAPABILITIES -->
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.WAKE_LOCK" />
            <uses-feature android:name="android.hardware.touchscreen"
                          android:required="false" />
            <!-- END OF PERMISSIONS FOR CAPABILITIES -->
        </manifest>

    I'm about to start promoting the app after the next major release, so it's been kind of a bummer that I can't seem to get this to work. Any help would be appreciated; thanks in advance : )

    Read the article

  • Good Learning Method for Objective-C?

    - by Josh Kahane
    Hi, I know this must be asked a million times, and it can't be easy to answer as there is no definitive method, but any help would be appreciated. I have been playing around with all sorts of things in Xcode and with Objective-C, however I can't seem to find a way of learning things efficiently. I have bought the book 'Programming in Objective-C 2.0' and it's great, but it just lays down the basics, it seems. I want to learn in the 2D game development direction, then of course 3D after nailing that, if that's the right thing to do? I am 17, currently in year 13 (my last year of school/A Levels), and am almost definitely taking a gap year. Are there any good, well-known, reputable courses, online or offline (real world)? This is my first programming language, and I am absolutely serious about learning it.

    One last question: when learning things online, I have in the past started building a feature and learning a certain aspect of programming, only to find out after adding more that it slows down the app or is too inefficient. Is the key to use a certain method in a certain situation (there being so many ways to do the same thing), or to use any of those methods and then refine it in your app to make it run smoothly? Sorry, it's hard for me to know, with little experience thus far. Sorry for rambling on! I would appreciate any help, thank you!

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if it is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote = {
            [<field: DataMember(Name="securityIdentifier") >] RicCode : string
            [<field: DataMember(Name="madeOn") >] MadeOn : DateTime
            [<field: DataMember(Name="closePrice") >] Price : float
        }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString : string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length)
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results : Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter (fun result -> m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine() |> ParseQuoteString |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}", quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price

    Read the article

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that is a two-step process: 1) it loads the data into a database that is only used by the cube processing, and 2) it starts off the SSAS cube processing against said database. It seems pretty well isolated, but occasionally (once a week, sometimes twice) it will fail with the following exception: "Errors in the OLAP storage engine: The attribute key cannot be found".

    Now, the interesting things are that:

    1. The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure).
    2. The source table, when I inspect it, does actually contain the attribute key that it says could not be found.
    3. Most interestingly, if I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.

    In both the aforementioned job and when I reprocess them through SSMS, I am using ProcessFull, so it should be reprocessing them completely. Has anyone run into such an issue? I'm scratching my head about it, because if it were a genuine data integrity issue, reprocessing the cube again wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor point to a data integrity problem as the root cause. Thanks for any input you can provide!

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- Order number, unique across all orders
            ProductID, -- Product ID, can be used as a key to look up
                       -- product info in another table
            Price,     -- Price of the product per unit at the time of the order
            Quantity,  -- Quantity of the product for the order
            Total,     -- Total cost of the order for the product (Price * Quantity)
            Date,      -- Date of the order
            StoreID,   -- The store that created the order
            PRIMARY KEY(OrderID)
        );

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information on products for any particular store. You could then determine which store has sold the most and made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue?

    For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered, given the above scenario?
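
    For what the trigger route might look like, a minimal sketch (MySQL syntax assumed; the summary table and trigger names are illustrative, and AvgPrice is left derived as TotalCost/QTY so it never drifts out of sync):

        CREATE TABLE SalesSummaryTable (
            StoreID   INT,
            ProductID INT,
            TotalCost DECIMAL(18,2),
            QTY       INT,
            PRIMARY KEY (StoreID, ProductID)
        );

        CREATE TRIGGER trg_sales_summary
        AFTER INSERT ON SalesHistoryTable
        FOR EACH ROW
            INSERT INTO SalesSummaryTable (StoreID, ProductID, TotalCost, QTY)
            VALUES (NEW.StoreID, NEW.ProductID, NEW.Total, NEW.Quantity)
            ON DUPLICATE KEY UPDATE
                TotalCost = TotalCost + NEW.Total,
                QTY       = QTY + NEW.Quantity;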

    Read the article

  • Daily, Weekly and Monthly Page View Counter

    - by Jens Fahnenbruck
    I'm building a website with user-generated content. On the home page I want to show a list of all created items, and I want to be able to sort them by a view counter. That sounds easy, but I want multiple counters: I want to know which was the most visited item in the last day, the last week, the last month, or overall.

    My first idea was to create four counter columns in the item's DB table, one for each of daily, weekly, monthly and overall, and then create a cron job that clears the daily counter every 24 hours, the weekly counter every 7 days, and so on. But my problem with this is: what happens if I want to know the most viewed item of the week just after the weekly counter got cleared?

    What I need is an efficient way to maintain a continuous counter, which is reduced for every page view that is too old and increased for every new page view. Right now I'm thinking of a solution using the Redis server, but I have no solution yet. I'm just looking for a general idea here, but FYI, I'm developing this application in Ruby on Rails.
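
    One common Redis pattern for this, sketched in Ruby with the redis-rb gem (key names are illustrative): bucket view counts per day in sorted sets, then union the last N days to get a rolling ranking. Nothing ever has to be cleared on a schedule; old days simply fall out of the union, and key expiry can reclaim them.

        require 'date'
        require 'redis'

        # On each page view: bump today's bucket for the item.
        def record_view(redis, item_id)
          redis.zincrby("views:#{Date.today}", 1, item_id)
        end

        # Top items over the last `days` days: merge the daily buckets.
        def top_items(redis, days, limit = 10)
          keys = (0...days).map { |i| "views:#{Date.today - i}" }
          redis.zunionstore("views:last#{days}", keys)
          redis.zrevrange("views:last#{days}", 0, limit - 1, with_scores: true)
        end

        redis = Redis.new
        record_view(redis, 42)
        p top_items(redis, 7)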

    Read the article

  • Is it immoral to write crappy code even if readability and correctness are not requirements?

    - by mafutrct
    There are cases when crappy (i.e. unreadable and buggy) code is not much of a problem. For instance, imagine you need to generate a big text file that mostly follows a simple pattern, with a few very complex exceptions. What do you do? You quickly write a simple algorithm and insert the exceptional bits in the output manually, to save four hours. The code is unreadable and the output is flawed, but it's still the correct approach, since it is much faster.

    But let's get this straight: I hate bad code. I've had to read and work with code that caused my stomach to hurt. I care a lot about good code. And actually, I caught myself thinking that it is immoral to write bad code, even though the dirty approach is sometimes superior. I was surprised at myself, and found my idea to be very irrational. Did you ever experience this? Should I just get rid of this stupid idea and use the most efficient approach to coding?

    Read the article

  • Producer/consumer system using database (MySql), is this feasible?

    - by johnrl
    Hi all. I need something to coordinate my system, which has several consumers/producers each running on different machines with different operating systems. I have been researching using MySQL to do this, but it seems ridiculously difficult. My requirements are simple: I want to be able to add or remove consumers/producers at any time, and thus they should not depend on each other at all. Naturally, a database would separate the two nicely.

    I have been looking at the Q4M message-queuing plugin for MySQL, but it seems complicated to use: I have to recompile it every time I upgrade MySQL (can this really be true?), because when I try to install it on Ubuntu 9.10 with MySQL 5.1.37, it says "Can't open shared library 'libqueue_engine.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)". There is apparently no precompiled version for MySQL 5.1.37. Also, what if I want to run MySQL on my Windows machine? Then I can't rely on this plugin at all, as it only seems to run on Linux and OS X. I really need some input on how best to construct my system.
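
    For simple workloads, plain MySQL tables can serve as a queue without any plugin, at the cost of consumers polling. A sketch of one portable claim pattern (table and consumer names are illustrative):

        CREATE TABLE job_queue (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            payload    TEXT,
            claimed_by VARCHAR(64) NULL
        );

        -- Producer:
        INSERT INTO job_queue (payload) VALUES ('work item');

        -- Consumer: atomically claim one unclaimed row, then fetch it.
        UPDATE job_queue SET claimed_by = 'consumer-42'
        WHERE claimed_by IS NULL ORDER BY id LIMIT 1;

        SELECT id, payload FROM job_queue WHERE claimed_by = 'consumer-42';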

    Read the article
