Search Results

Search found 58802 results on 2353 pages for 'system xml'.


  • What technologies to use for a particle system with enormous calculation demand?

    - by Amir Rezaei
    I have a particle system with X particles. Each particle tests for collision against every other particle, which gives X*X = X^2 collision tests per frame. At 60 frames per second that corresponds to 60*X^2 collision tests per second. What is the best technological approach for such intensive calculations? Should I use F#, C, C++, C#, or something else? The constraints are:
    1. The code is written in C# with the latest XNA.
    2. Multi-threading may be considered.
    3. No special algorithm that only tests collisions against the nearest neighbours or that otherwise reduces the problem.
    The last constraint may seem strange, so let me explain: regardless of constraint 3, given a problem with an enormous computational requirement, what would be the best approach to solve it? An algorithm reduces the problem, yet the same algorithm may behave differently depending on the technology. Consider the pros and cons of the CLR versus native C.
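    To make the scale concrete, the brute-force pass described above looks roughly like the sketch below. This is a minimal outline only, assuming a hypothetical Particle struct with a position and radius; it is not taken from the asker's code.

        using Microsoft.Xna.Framework;   // for Vector2 (XNA)

        struct Particle
        {
            public Vector2 Position;
            public float Radius;
        }

        static class BruteForce
        {
            // Every pair is compared once, so the work still grows on the order of X^2 per frame.
            public static void DetectCollisions(Particle[] particles)
            {
                for (int i = 0; i < particles.Length; i++)
                {
                    for (int j = i + 1; j < particles.Length; j++)
                    {
                        float r = particles[i].Radius + particles[j].Radius;
                        if (Vector2.DistanceSquared(particles[i].Position, particles[j].Position) <= r * r)
                        {
                            // Handle the collision between particles i and j here.
                        }
                    }
                }
            }
        }

    Whichever runtime is chosen, this inner loop is where the time goes.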

    Read the article

  • Javascript - Wait for function to return

    - by LoadData
    So, I am working on a project which requires me to call a function to get data from an external source. The issue I am having is that the code after the function call continues before the function has returned a value. Here is the function:

        function getData() {
            var myVar;
            var xmlLoc = $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function(data) {
                $xml = $(data);
                myVar = $xml;
                console.log(myVar);
                console.log(String($xml));
                localStorage.setItem("Data", $xml);
                console.log(String(localStorage.getItem("Data")));
                return myVar;
            });
            return myVar;
            console.log("Does this continue");
        }

    And here is where it is called:

        $(document).on("pageshow", "#Information", function() {
            $xml = $(getData()); // Here is the function call
            console.log($xml);   // However, it will instantly go to this line before 'getData' has returned a value.
            $xml.find('AllData').each(function() {
                $(this).find('item').each(function() {
                    if ($(this).find('Category').text() == "Facilities") {
                        console.log($(this).find('Title').text());
                        // Do stuff here
                    } else if ($(this).find('Category').text() == "Contacts" || $(this).find('Category').text() == "Information") {
                        console.log($(this).find('Title').text());
                        // Do stuff here too
                    }
                });
                $('#informationList').html(output).listview().listview("refresh");
                console.log("Finished");
            });
        });

    Right now, I'm unsure why it is not working. My guess is that it is because I am calling a function within a function. Does anyone have any ideas on how this issue can be fixed?

    Read the article

  • What is the best file system to use for a second hard drive when dual booting between WinXP and Win7

    - by Corey
    I am dual booting for legacy reasons, and I have a 2nd internal drive that I would like to use from both XP and 7. Should I go with the standard NTFS (will the security features be an issue, with different SIDs for the different users)? Should I go with FAT32? Should I try out the new exFAT? Also, I currently have two of my 3 drives set up as "dynamic disks" with 1 spanned volume created on them (I did this from XP). Win7 can see them/it fine. Is this an OK thing to do?

    Read the article

  • Microsoft Access DB Connection

    - by sikas
    I have a Microsoft Access database (2003) that I want to connect to using C#. The problem I'm facing is that I don't have Access installed as part of the Office package, so I was wondering if it is possible to connect to it as a database to retrieve and update the tables. Thanks.

    UPDATE: I have received the error below:

        Error Detected: System.InvalidOperationException: The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.
           at System.Data.OleDb.OleDbServicesWrapper.GetDataSource(OleDbConnectionString constr, DataSourceWrapper& datasrcWrapper)
           at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
           at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.OleDb.OleDbConnection.Open()
           at SampleNamespace.SampleClass.Main()
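    For context, a connection of the kind the question is attempting would look roughly like the sketch below. It is only an outline: the file path and table name are placeholders, and it assumes the 32-bit Jet provider is available to the process (Jet ships only as a 32-bit provider, which is a common reason for the "not registered" error in 64-bit processes).

        using System;
        using System.Data.OleDb;

        class AccessExample
        {
            static void Main()
            {
                // Placeholder path to the .mdb file; Access itself does not need to be installed,
                // only the Jet OLE DB provider.
                string connectionString =
                    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\sample.mdb";

                using (var connection = new OleDbConnection(connectionString))
                using (var command = new OleDbCommand("SELECT COUNT(*) FROM SomeTable", connection))
                {
                    connection.Open();
                    Console.WriteLine(command.ExecuteScalar());
                }
            }
        }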

    Read the article

  • Memory usage of strings (or any other objects) in .Net

    - by ava
    I wrote this little test program:

        using System;

        namespace GCMemTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    System.GC.Collect();

                    System.Diagnostics.Process pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess();
                    long startBytes = pmCurrentProcess.PrivateMemorySize64;
                    double kbStart = (double)(startBytes) / 1024.0;
                    System.Console.WriteLine("Currently using " + kbStart + "KB.");

                    {
                        int size = 2000000;
                        string[] strings = new string[size];
                        for (int i = 0; i < size; i++)
                        {
                            strings[i] = "blabla" + i;
                        }
                    }

                    System.GC.Collect();
                    pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess();
                    long endBytes = pmCurrentProcess.PrivateMemorySize64;
                    double kbEnd = (double)(endBytes) / 1024.0;
                    System.Console.WriteLine("Currently using " + kbEnd + "KB.");
                    System.Console.WriteLine("Leaked " + (kbEnd - kbStart) + "KB.");
                    System.Console.ReadKey();
                }
            }
        }

    The output in a Release build is:

        Currently using 18800KB.
        Currently using 118664KB.
        Leaked 99864KB.

    I assume that the GC.Collect call will remove the allocated strings since they go out of scope, but it appears it does not. I do not understand this, nor can I find an explanation for it. Maybe anyone here? Thanks, Alex
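    One note on reading those numbers: PrivateMemorySize64 reports the private bytes of the whole process, and the CLR does not necessarily return freed heap segments to the operating system after a collection, so process-level counters can stay high even when the managed objects are gone. A small sketch (not part of the original program) of the lines that would measure the managed heap directly, for comparison:

        // Forces a full collection, then returns the bytes currently allocated on the managed heap.
        long managedBytes = GC.GetTotalMemory(true);
        Console.WriteLine("Managed heap: " + (managedBytes / 1024.0) + "KB.");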

    Read the article

  • How to read system information in C++ on Windows and Linux?

    - by f4
    I need to read system information such as CPU/RAM/disk usage in C++. Maybe swap, network and process information too, but that's less important. It has probably been done thousands of times before, so I first tried to search for a library. Someone here suggested SIGAR, which seems to fit my needs, but it has a GPL license and this is for inclusion in a proprietary product, so it's not an option here. I feel like it's something not that easy to implement, as it will need testing on several platforms, so a library would be welcome. If you don't know of any library, could you point me in the right direction for both platforms?

    Read the article

  • What are the possible tags inside the "global" tag in Magento "config.xml" file?

    - by Knowledge Craving
    Can a professionally experienced Magento developer tell me how to accomplish the following? I want to know which tags can appear inside the "global" tag of the "config.xml" file in every module's etc folder. I have searched for this answer in many places on the Internet, but in vain. Please provide full details for Magento version 1.4.0.0, so that users finding this question get something genuinely useful instead of scratching their heads. I would really like a detailed explanation, because every newbie like me gets confused at this point. From what I know so far, this file can configure routers, rewrites, cron jobs, adminhtml, frontend html and much more. But without a strong grasp of the concepts, nobody can be confident that their code is correct with respect to the Magento MVC architecture. So I would like the fundamental concept explained in detail here, so that nobody falls into this pitfall again. Many thanks to those who answer with the whole concept clearly in mind.
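    For orientation only, here is a non-exhaustive sketch of the kind of children commonly seen under the global tag in a Magento 1.4-era module config.xml. The module and class names are hypothetical placeholders, and this is not an authoritative list of every permitted tag.

        <config>
            <global>
                <models>
                    <examplemodule>
                        <class>Example_Module_Model</class>
                    </examplemodule>
                </models>
                <resources>
                    <examplemodule_setup>
                        <setup>
                            <module>Example_Module</module>
                        </setup>
                    </examplemodule_setup>
                </resources>
                <blocks>
                    <examplemodule>
                        <class>Example_Module_Block</class>
                    </examplemodule>
                </blocks>
                <helpers>
                    <examplemodule>
                        <class>Example_Module_Helper</class>
                    </examplemodule>
                </helpers>
                <events>
                    <!-- observer registrations go here -->
                </events>
            </global>
        </config>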

    Read the article

  • C#: open web browser in many tabs

    - by tee
    How can I create tabs so that tab1 opens samsung.com, tab2 opens hp.com, ...?

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace browsergotosamsung
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                    webBrowser1.Navigate("http://www.samsung.com");
                    webBrowser2.Navigate("http://www.hp.com");
                    webBrowser3.Navigate("http://www.IBM.com");
                }

                private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                }

                private void webBrowser3_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                }

                private void webBrowser2_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                    webBrowser2.Size // incomplete statement as posted
                }
            }
        }
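    If the goal really is separate tabs rather than three fixed WebBrowser controls, one common WinForms pattern is a TabControl with one WebBrowser per TabPage. The sketch below is only an illustration; the form name and URL list are placeholders, not the asker's code.

        using System.Windows.Forms;

        public class BrowserForm : Form   // hypothetical form, separate from the asker's Form1
        {
            public BrowserForm()
            {
                string[] urls = { "http://www.samsung.com", "http://www.hp.com", "http://www.ibm.com" };

                var tabs = new TabControl { Dock = DockStyle.Fill };
                foreach (string url in urls)
                {
                    var page = new TabPage(url);                         // tab header shows the URL
                    var browser = new WebBrowser { Dock = DockStyle.Fill };
                    page.Controls.Add(browser);
                    tabs.TabPages.Add(page);
                    browser.Navigate(url);                               // each tab loads its own site
                }
                Controls.Add(tabs);
            }
        }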

    Read the article

  • New 64 bit linux system has regular processes (ps, grep etc) taking up way too much VIRT mem

    - by user42980
    We just moved from a 32-bit machine to a 64-bit machine. We have quickly run out of memory, despite the new boxes having twice as much RAM as the old boxes. Running a simple ps command illustrates the problem.

    New machine:

        132 prod-Charlotte1-node1 ~/public_html/rearch/cgi-bin ps aux | grep ps
        root       293  0.0  0.0      0    0 ?      S<   May09   0:00 [kpsmoused]
        xamine    2267  1.0  0.0  63728  928 pts/3  R+   16:50   0:00 ps aux
        xamine    2268  0.0  0.0  61172  752 pts/3  S+   16:50   0:00 grep ps

    Old machine:

        132 prod-116431-node1:/home/xamine ps aux | grep ps
        xamine   23191  0.0  0.0   2332  768 pts/6  R+   15:41   0:00 ps aux
        xamine   23192  0.0  0.0   3668  692 pts/6  S+   15:41   0:00 grep ps

    Notice that the ps process is using 63M of VIRT memory versus 2M on the old machine.

    New machine:

        Enterprise Linux Enterprise Linux Server release 5.4 (Carthage)
        Red Hat Enterprise Linux Server release 5.4 (Tikanga)

    Old machine:

        Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Thanks for any thoughts you have!

    Read the article

  • JAXB Unable To Handle Attribute with Colon (:) in name?

    - by Intellectual Tortoise
    I am attempting to use JAXB to unmarshal XML files whose schema is defined by a DTD (ugh!). The external provider of the DTD has specified one of the element attributes as xml:lang:

        <!ATTLIST langSet
            id ID #IMPLIED
            xml:lang CDATA #REQUIRED >

    This comes into the xjc-generated class (standard generation; no *.xjb magic) as:

        @XmlAttribute(name = "xml:lang", required = true)
        @XmlJavaTypeAdapter(NormalizedStringAdapter.class)
        protected String xmlLang;

    However, when unmarshalling valid XML files with JAXB, the xmlLang attribute is always null. When I edited the XML file, replacing xml:lang with lang, and changed the @XmlAttribute to match, unmarshalling was successful (i.e. the attribute was non-null). I did find this thread: http://old.nabble.com/unmarshalling-ignores-element-attribute-%27xml%27-td22558466.html. But the resolution there was to convert to XML Schema, etc. My strong preference is to go straight from the un-altered DTD (since it is externally provided and defined by an ISO standard). Is this a JAXB bug? Am I missing something about "namespaces" in attribute names? FWIW, java -version = "build 1.6.0_20-b02" and xjc -version = "xjc version "JAXB 2.1.10 in JDK 6"".

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture. I have a Windows service listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves it to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen. For example, assume an XML document XXX is received by the service: it deserializes the XML, creates the book object and saves it to the DB, and a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects from the XML that was saved (yes, the XML is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is, how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream system, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
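    To make the reported hotspot concrete, the refresh path described above amounts to something like the sketch below, run once per row on every refresh. The Book type and the use of XmlSerializer are stand-ins for whatever the real constructor does with the stored SCHEMA snippet; they are not taken from the asker's code.

        using System.IO;
        using System.Xml.Serialization;

        public class Book                           // hypothetical shape of the stored object
        {
            public string Title { get; set; }
        }

        public static class RefreshPath
        {
            // Runs once per row on every UI refresh; for large XML snippets this step,
            // not the database query, is what dominates the two-minute refresh.
            public static Book FromXml(string xmlSnippet)
            {
                var serializer = new XmlSerializer(typeof(Book));
                using (var reader = new StringReader(xmlSnippet))
                {
                    return (Book)serializer.Deserialize(reader);
                }
            }
        }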

    Read the article

  • Is there a generic open-source reporting system out there?

    - by syntactico
    I recently started a position at a new company, and one of the first projects they want is an internal reporting system that points at databases A, B and C and reports various metrics/statistics/predictions. Basically, the same thing I've done or worked on at every company I've ever been hired by. Since this gets a bit boring after a while, I was wondering if there already exists an open-source package that accomplishes this goal. Ideally, it would work with multiple databases out of the box (PostgreSQL, MySQL, MSSQL, Oracle minimally), determine relationships between tables (either automatically from their schemas, or by letting you set them up manually after pooling all the tables), and allow you to create reports based on a subset of tables, customizing what data you want displayed/calculated (I suppose this would be challenging, since you have no idea what every audience wants, and it would need to be flexible). I'm debating making something like this in my spare time if one does not already exist. Just curious.

    Read the article

  • Should I use a class for this: reading an XML file using lxml?

    - by PulpFiction
    Hi everyone. This question is a continuation of my previous question, in which I asked about passing around an ElementTree. I need to read the XML files only, and to solve this I decided to create a global ElementTree and then use it wherever required. My questions are: Is this an acceptable practice? I heard global variables are bad. If I don't make it global, I was advised to make a class. But do I really need to create a class? What benefits would I gain from that approach? Note that I would be handling only one ElementTree instance per run, and the operations are read-only. If I don't use a class, how and where do I declare that ElementTree so that it is available globally? (Note that I would be importing this module.) Please answer with the understanding that I am a beginner to development; at this stage I can't figure out whether to use a class or just go with the functional-style programming approach.

    Read the article

  • How to track conversion rate (clicks to sales) from an internal advertising system?

    - by Ed Woodcock
    I am currently writing an internal advertising system for a company client's website, where the adverts will only be seen by internal users and all transactions take place within the site (i.e. the adverts are for member-only content available on the site). Does anyone have any recommendations on the best way to track the conversion rate of these adverts (i.e. views:clicks:sales)? EDIT: I'm not looking for a 'Why don't you use Google Analytics'-type answer; I'm looking for possible architecture outlines, i.e. a 'why don't you store a GUID in a cache temporarily and see if it ties back to the advert' kind of answer. /EDIT In a previous job I built something based on an internal cache which simply did view:click tracking; however, the addition of the sales rate makes this task more complex, especially once you account for someone clicking through an advert and not purchasing immediately. Cheers, Ed (N.B. I'm leaving this purposely vague in order to (hopefully) get some answers providing ideas I've yet to think of, by coming at the problem from a different angle.)
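    One way to flesh out the GUID-in-a-cache idea mentioned in the question is sketched below. It is purely illustrative: the in-memory store, the seven-day attribution window and the type names are all assumptions, not a recommendation for this particular site.

        using System;
        using System.Collections.Concurrent;

        // Maps a click token to the advert that generated it, so a later sale can be attributed.
        public class ClickTracker
        {
            private class ClickInfo
            {
                public int AdvertId;
                public DateTime ClickedAt;
            }

            private readonly ConcurrentDictionary<Guid, ClickInfo> clicks =
                new ConcurrentDictionary<Guid, ClickInfo>();

            private static readonly TimeSpan AttributionWindow = TimeSpan.FromDays(7); // arbitrary choice

            // Record a click and return the token to carry through the session (cookie, query string, ...).
            public Guid RecordClick(int advertId)
            {
                Guid token = Guid.NewGuid();
                clicks[token] = new ClickInfo { AdvertId = advertId, ClickedAt = DateTime.UtcNow };
                return token;
            }

            // Called when a sale completes; returns the advert id if the sale can still be attributed.
            public int? AttributeSale(Guid token)
            {
                ClickInfo click;
                if (clicks.TryRemove(token, out click) && DateTime.UtcNow - click.ClickedAt <= AttributionWindow)
                {
                    return click.AdvertId;
                }
                return null;
            }
        }

    In practice a persistent store would replace the dictionary, since delayed purchases can outlive both the cache and the web process.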

    Read the article

  • How do I get transparent, efficient, file system snapshotting or versioning on ext3/4?

    - by shovas
    I've long thought about versioning file systems. This is a killer feature, and I've looked at Wayback, ext3cow, ZFS, FUSE solutions, and plain cvs/svn/git overlays. I consider ext3cow the model for my requirements: transparent and efficient, though I can do without the extra "ls abc@timestamp" feature, as long as I somehow get automated, transparent versioning of my files. It could be instantaneous or based on snapshots at intervals of 10s, 30s, 1m, 5m, 15m, etc. Just something that will efficiently deal with thousands of files in a given directory, all of various sizes, most small but some upwards of 100 MB to 1 GB. ZFS isn't really an option, as I'm on Linux (and I would prefer not to use it through FUSE, since I already have an ext3 setup I want to version, not something new). What solutions are out there?

    Read the article

  • Looking for a VCS wrapper that tracks system file changes across the whole *nix OS and sends diffs through email

    - by nextus
    I need some software that watches custom directories across the whole OS (e.g. /etc) and alerts me if someone edits a file inside. Additionally, this tool must automatically commit and push changes to a backup server, so I can easily determine when a specific change to a specific file was made. I'm using cvsbackup right now, but I want to create or find something more modern. I think using git as the VCS is a great idea: I could have a local repository and easily revert changes to my configuration files, and pushing changes to a remote repository would help me recover my configuration files if the server fails. It doesn't seem difficult to write a wrapper around git, but there are a few problems. For example, I need to track custom directories such as /usr/local/nginx/ and /etc/, so the root of my git repository has to be /. I don't need to track the other directories, so I have to write an unwieldy whitelist-style .gitignore:

        *
        !.gitignore
        !/etc/
        !etc/*
        !/usr
        /usr/*
        !/usr/local
        /usr/local/*
        !/usr/local/nginx
        !/usr/local/nginx/*

    It's very daunting and error-prone, so maybe it's a good idea to create an intermediate file that the wrapper reads and converts into .gitignore format. Additionally, I don't want to keep my .git folder on the / partition, so I need to set the appropriate GIT_DIR and GIT_WORK_TREE variables for git. Are there any ready-to-use tools for this task? I haven't found any, but I don't believe that nobody needs this feature.

    Read the article

  • How to configure permissions for win2008 task running as Network Service to stop/start a service on different system?

    - by weiji
    Well... the title says it all, actually. We've got a Scheduled Task set up on a Windows Server 2003 box running as the Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box; however, sc reports:

        [SC] OpenService FAILED 5: Access is denied.

    Running the same batch file via Windows Explorer has no issues, and my user account is part of the Administrators group, so I believe this is why there are no issues when I try it manually. Is this a permission I enable for Network Service on the first server, or do I enable permissions for Network Service somehow on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking to enable the Network Service account to access the service via the scheduled task.

    Read the article
