Search Results

Search found 58802 results on 2353 pages for 'system xml'.


  • What is the best file system to use for a second hard drive when dual booting between WinXP and Win7

    - by Corey
    I am dual booting for legacy reasons, and I have a second internal drive that I would like to use from both XP and 7. Should I go with standard NTFS? (Will the security features be an issue, given the different SIDs of the different users?) Should I go with FAT32? Should I try out the new exFAT? Also, I currently have two of my three drives set up as "dynamic disks" with one spanned volume created on them (I did this from XP). Win7 can see them/it fine. Is this an OK thing to do?

    Read the article

  • Memory usage of strings (or any other objects) in .Net

    - by ava
    I wrote this little test program:

        using System;

        namespace GCMemTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    System.GC.Collect();

                    System.Diagnostics.Process pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess();
                    long startBytes = pmCurrentProcess.PrivateMemorySize64;
                    double kbStart = (double)(startBytes) / 1024.0;
                    System.Console.WriteLine("Currently using " + kbStart + "KB.");

                    {
                        int size = 2000000;
                        string[] strings = new string[size];
                        for (int i = 0; i < size; i++)
                        {
                            strings[i] = "blabla" + i;
                        }
                    }

                    System.GC.Collect();
                    pmCurrentProcess = System.Diagnostics.Process.GetCurrentProcess();
                    long endBytes = pmCurrentProcess.PrivateMemorySize64;
                    double kbEnd = (double)(endBytes) / 1024.0;
                    System.Console.WriteLine("Currently using " + kbEnd + "KB.");
                    System.Console.WriteLine("Leaked " + (kbEnd - kbStart) + "KB.");
                    System.Console.ReadKey();
                }
            }
        }

    The output in a Release build is:

        Currently using 18800KB.
        Currently using 118664KB.
        Leaked 99864KB.

    I assume that the GC.Collect call will remove the allocated strings, since they go out of scope, but it appears it does not. I do not understand this, nor can I find an explanation for it. Maybe anyone here? Thanks, Alex
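
    As an aside for readers puzzling over these numbers, here is a minimal sketch (not part of the original question) of a more direct way to measure the managed allocation: GC.GetTotalMemory(true) forces a collection and reports live managed bytes, while PrivateMemorySize64 reports memory the process has committed, which the CLR often keeps reserved for its heaps even after the objects are collected; Process property values are also cached until Refresh() is called.

        // Sketch: compare managed-heap usage against process private bytes.
        using System;
        using System.Diagnostics;

        class GcMeasureSketch
        {
            static void Main()
            {
                long managedBefore = GC.GetTotalMemory(forceFullCollection: true);

                Allocate(); // the strings become unreachable when this returns

                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();

                long managedAfter = GC.GetTotalMemory(forceFullCollection: true);

                using (Process p = Process.GetCurrentProcess())
                {
                    p.Refresh(); // drop cached counters before reading them
                    Console.WriteLine("Managed heap delta: {0} KB", (managedAfter - managedBefore) / 1024);
                    Console.WriteLine("Private bytes:      {0} KB", p.PrivateMemorySize64 / 1024);
                }
            }

            static void Allocate()
            {
                var strings = new string[2000000];
                for (int i = 0; i < strings.Length; i++)
                    strings[i] = "blabla" + i;
            }
        }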

    Read the article

  • C#: open web browser in many tabs

    - by tee
    How can I create tabs, with tab1 opening samsung.com, tab2 opening hp.com, and so on?

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace browsergotosamsung
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                    webBrowser1.Navigate("http://www.samsung.com");
                    webBrowser2.Navigate("http://www.hp.com");
                    webBrowser3.Navigate("http://www.IBM.com");
                }

                private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                }

                private void webBrowser3_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                }

                private void webBrowser2_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
                {
                    webBrowser2.Size // incomplete statement in the original; this will not compile as written
                }
            }
        }
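
    One possible direction (a sketch, not the poster's code): rather than three fixed WebBrowser controls, create a TabPage per URL at runtime and dock a WebBrowser inside each. The form and method names below are illustrative.

        using System;
        using System.Windows.Forms;

        public class TabbedBrowserForm : Form
        {
            private readonly TabControl tabs = new TabControl { Dock = DockStyle.Fill };

            public TabbedBrowserForm()
            {
                Controls.Add(tabs);
                AddTab("Samsung", "http://www.samsung.com");
                AddTab("HP", "http://www.hp.com");
                AddTab("IBM", "http://www.ibm.com");
            }

            // Create a tab page hosting its own WebBrowser control and start navigation.
            private void AddTab(string title, string url)
            {
                var page = new TabPage(title);
                var browser = new WebBrowser { Dock = DockStyle.Fill };
                page.Controls.Add(browser);
                tabs.TabPages.Add(page);
                browser.Navigate(url);
            }

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.Run(new TabbedBrowserForm());
            }
        }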

    Read the article

  • New 64-bit Linux system has regular processes (ps, grep, etc.) taking up way too much VIRT memory

    - by user42980
    We just moved from a 32-bit machine to a 64-bit machine. We quickly ran out of memory, even though the new boxes have twice as much RAM as the old boxes. Running a simple ps command illustrates the problem.

    New machine:

        132 prod-Charlotte1-node1 ~/public_html/rearch/cgi-bin ps aux | grep ps
        root      293  0.0  0.0      0    0 ?      S<   May09   0:00 [kpsmoused]
        xamine   2267  1.0  0.0  63728  928 pts/3  R+   16:50   0:00 ps aux
        xamine   2268  0.0  0.0  61172  752 pts/3  S+   16:50   0:00 grep ps

    Old machine:

        132 prod-116431-node1:/home/xamine ps aux | grep ps
        xamine  23191  0.0  0.0   2332  768 pts/6  R+   15:41   0:00 ps aux
        xamine  23192  0.0  0.0   3668  692 pts/6  S+   15:41   0:00 grep ps

    Notice that the ps process is using 63 MB of VIRT memory vs. 2 MB on the old machine.

    New machine: Enterprise Linux Enterprise Linux Server release 5.4 (Carthage) / Red Hat Enterprise Linux Server release 5.4 (Tikanga)
    Old machine: Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Thanks for any thoughts you have!

    Read the article

  • JAXB Unable To Handle Attribute with Colon (:) in name?

    - by Intellectual Tortoise
    I am attempting to use JAXB to unmarshal XML files whose schema is defined by a DTD (ugh!). The external provider of the DTD has specified one of the element attributes as xml:lang:

        <!ATTLIST langSet
            id ID #IMPLIED
            xml:lang CDATA #REQUIRED >

    This comes into the xjc-generated class (standard generation; no *.xjb magic) as:

        @XmlAttribute(name = "xml:lang", required = true)
        @XmlJavaTypeAdapter(NormalizedStringAdapter.class)
        protected String xmlLang;

    However, when unmarshalling valid XML files with JAXB, the xmlLang attribute is always null. When I edited the XML file, replacing xml:lang with lang, and changed the @XmlAttribute to match, unmarshalling was successful (i.e. the attributes were non-null). I did find this: http://old.nabble.com/unmarshalling-ignores-element-attribute-%27xml%27-td22558466.html. But the resolution there was to convert to XML Schema, etc. My strong preference is to go straight from an unaltered DTD (since it is externally provided and defined by an ISO standard). Is this a JAXB bug? Am I missing something about "namespaces" in attribute names? FWIW, java -version = "build 1.6.0_20-b02" and xjc -version = "xjc version "JAXB 2.1.10 in JDK 6""
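
    For context, the usual explanation is that the xml: prefix is not part of the attribute's local name; it is bound to the reserved namespace http://www.w3.org/XML/1998/namespace. Below is a hand-edited sketch of the generated class (the class name is illustrative) that maps the attribute that way, which JAXB can then populate; whether to patch generated code or drive this from a bindings customization is a separate question.

        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.bind.annotation.XmlAttribute;
        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.adapters.NormalizedStringAdapter;
        import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

        @XmlRootElement(name = "langSet")
        @XmlAccessorType(XmlAccessType.FIELD)
        public class LangSet {

            @XmlAttribute(name = "id")
            protected String id;

            // Declared with local name "lang" plus the xml namespace instead of "xml:lang".
            @XmlAttribute(name = "lang",
                          namespace = "http://www.w3.org/XML/1998/namespace",
                          required = true)
            @XmlJavaTypeAdapter(NormalizedStringAdapter.class)
            protected String xmlLang;
        }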

    Read the article

  • How does the operating system know which parameter to pass to /etc/init.d/ ?

    - by iDev247
    I've been working with Linux for a while, but in a rather simple manner. I understand that scripts in init.d are executed when the OS starts, but how exactly does that work? How does the OS know which parameter to pass to a script? To start Apache I would do sudo /etc/init.d/apache2 start. If I run sudo /etc/init.d/apache2 it doesn't work without the start. How does the OS pass start to the script?
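
    To make the mechanism concrete, here is a minimal, illustrative init.d-style skeleton (not the real apache2 script): whoever runs the script, whether the init system at boot or you at the prompt, supplies one argument such as start or stop, and the script itself dispatches on "$1". With no argument it falls through to the usage message, which is why the bare command does nothing useful.

        #!/bin/sh
        # Illustrative skeleton of an init script for a hypothetical "mydaemon".
        case "$1" in
            start)
                echo "Starting mydaemon"
                # launch the daemon here
                ;;
            stop)
                echo "Stopping mydaemon"
                # terminate the daemon here
                ;;
            restart)
                "$0" stop
                "$0" start
                ;;
            *)
                echo "Usage: $0 {start|stop|restart}" >&2
                exit 1
                ;;
        esac
        exit 0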

    Read the article

  • Microsoft Access DB Connection

    - by sikas
    I have a Microsoft Access DB (2003) that I want to connect to using C#. The problem I'm facing is that I don't have Access installed with the Office package, so I was wondering if it is possible to connect to it as a database to retrieve and update the tables. Thanks.

    UPDATE: I have received the error below:

        Error Detected: System.InvalidOperationException: The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.
           at System.Data.OleDb.OleDbServicesWrapper.GetDataSource(OleDbConnectionString constr, DataSourceWrapper& datasrcWrapper)
           at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
           at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.OleDb.OleDbConnection.Open()
           at SampleNamespace.SampleClass.Main()
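
    For readers hitting the same error: this is typically a provider-registration issue rather than a missing copy of Access. A sketch of a connection follows (the path and table name are made up); the Jet provider is 32-bit only, so a 64-bit process needs either an x86 build target or the Microsoft Access Database Engine redistributable, whose OLE DB provider is named Microsoft.ACE.OLEDB.12.0.

        using System;
        using System.Data.OleDb;

        class AccessDemo
        {
            static void Main()
            {
                // 32-bit process with Jet:
                // string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\sample.mdb";

                // With the ACE redistributable installed (a 64-bit build of the engine serves 64-bit processes):
                string connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\sample.mdb";

                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM SomeTable", conn))
                {
                    conn.Open();
                    Console.WriteLine("Rows: " + cmd.ExecuteScalar());
                }
            }
        }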

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture.

    I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one of whose properties is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen.

    For example, assume an XML document XXX is received by the service: it deserializes the XML, creates the Book object and saves it to the DB, and a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all Book objects by ID and creates the Book objects out of the XML that was saved (yes, the XML is the constructor param).

    Now my issue is that the refresh takes more than 2 minutes to display, say, 50 Book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization.

    My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case?

    Thanks in advance for any suggestions. Regards, -Mike
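
    One possible shape for the caching idea raised at the end (a sketch; the type, column names, and the version notion are assumptions, not the poster's schema): key a cache on the row id plus a version or last-updated value, so the expensive XML deserialization only runs for rows that actually changed between refreshes.

        using System;
        using System.Collections.Generic;

        public class Book { /* properties built from the XML snippet */ }

        public sealed class BookViewCache
        {
            // id -> (version it was built from, deserialized object)
            private readonly Dictionary<int, Tuple<int, Book>> cache =
                new Dictionary<int, Tuple<int, Book>>();

            public Book Get(int id, int version, string xmlSnippet, Func<string, Book> deserialize)
            {
                Tuple<int, Book> entry;
                if (cache.TryGetValue(id, out entry) && entry.Item1 == version)
                    return entry.Item2;                      // unchanged row: reuse the object

                Book book = deserialize(xmlSnippet);         // new or changed row: pay the cost once
                cache[id] = Tuple.Create(version, book);
                return book;
            }
        }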

    Read the article

  • Passing values between pages in JavaScript

    - by buni
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Data.SqlClient;
        using System.Configuration;
        using System.Text;
        using System.Web.Services;
        using System.IO;

        namespace T_Smade
        {
            public partial class ConferenceManagement : System.Web.UI.Page
            {
                volatile int i = 0;

                protected void Page_Load(object sender, EventArgs e)
                {
                    GetSessionList();
                }

                public void GetSessionList()
                {
                    string secondResult = "";
                    string userName = "";
                    try
                    {
                        if (HttpContext.Current.User.Identity.IsAuthenticated)
                        {
                            userName = HttpContext.Current.User.Identity.Name;
                        }
                        SqlConnection thisConnection = new SqlConnection(@"data Source=ZOLA-PC;AttachDbFilename=D:\2\5.Devp\my DB\ASPNETDB.MDF;Integrated Security=True");
                        thisConnection.Open();
                        SqlCommand secondCommand = thisConnection.CreateCommand();
                        secondCommand.CommandText = "SELECT myApp_Session.session_id FROM myApp_Session, myApp_Role_in_Session where myApp_Role_in_Session.user_name='" + userName + "' and myApp_Role_in_Session.session_id=myApp_Session.session_id";
                        SqlDataReader secondReader = secondCommand.ExecuteReader();
                        while (secondReader.Read())
                        {
                            secondResult = secondResult + secondReader["session_id"].ToString() + ";";
                        }
                        secondReader.Close();
                        SqlCommand thisCommand = thisConnection.CreateCommand();
                        thisCommand.CommandText = "SELECT * FROM myApp_Session;";
                        SqlDataReader thisReader = thisCommand.ExecuteReader();
                        while (thisReader.Read())
                        {
                            test.Controls.Add(GetLabel(thisReader["session_id"].ToString(), thisReader["session_name"].ToString()));
                            string[] compare = secondResult.Split(';');
                            foreach (string word in compare)
                            {
                                if (word == thisReader["session_id"].ToString())
                                {
                                    test.Controls.Add(GetButton(thisReader["session_name"].ToString(), "Join Session"));
                                }
                            }
                        }
                        thisReader.Close();
                        thisConnection.Close();
                    }
                    catch (SqlException ex)
                    {
                    }
                }

                private Button GetButton(string id, string name)
                {
                    Button b = new Button();
                    b.Text = name;
                    b.ID = "Button_" + id + i;
                    b.Command += new CommandEventHandler(Button_Click);
                    b.CommandArgument = id;
                    i++;
                    return b;
                }

                private Label GetLabel(string id, string name)
                {
                    Label tb = new Label();
                    tb.Text = name;
                    tb.ID = id;
                    return tb;
                }

                protected void Button_Click(object sender, CommandEventArgs e)
                {
                    Response.Redirect("EnterSession.aspx?session=" + e.CommandArgument.ToString());
                }
            }
        }

    With this code, when a user clicks a button the browser goes to www.mypage/EnterSession.aspx?session=session_name. In EnterSession.aspx I have used the code below to track the current URL:

        _gaq.push(['pageTrackerTime._trackEvent', 'Category', 'Action', document.location.href, roundleaveSiteEnd]);

    Now I would also like to pass the session_name from the previous page into the Action parameter. See the code below from the previous page:

        test.Controls.Add(GetButton(thisReader["session_name"].ToString(), "Join Session"));

    Any idea how to do it? Thanks
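
    A sketch of how EnterSession.aspx could pick the session name back out of the query string and feed it into the Action parameter of the tracking call (the class and script-key names beyond those shown in the question are illustrative, and HttpUtility.JavaScriptStringEncode assumes .NET 4; on older frameworks a manual escape would be needed):

        using System;
        using System.Web;
        using System.Web.UI;

        public partial class EnterSession : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // The value appended as ?session=... by the previous page's redirect.
                string sessionName = Request.QueryString["session"] ?? "unknown";

                // Emit the tracking call with the session name as the event Action.
                string script =
                    "_gaq.push(['pageTrackerTime._trackEvent', 'Category', '"
                    + HttpUtility.JavaScriptStringEncode(sessionName)
                    + "', document.location.href, roundleaveSiteEnd]);";

                ClientScript.RegisterStartupScript(GetType(), "trackSession", script, true);
            }
        }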

    Read the article

  • Javascript - Wait for function to return

    - by LoadData
    So, I am working on a project which requires me to call a function to get data from an external source. The issue I am having: I call the function, but the code after the function call continues before the function has returned a value. Here is the function:

        function getData() {
            var myVar;
            var xmlLoc = $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function(data) {
                $xml = $(data);
                myVar = $xml;
                console.log(myVar);
                console.log(String($xml));
                localStorage.setItem("Data", $xml);
                console.log(String(localStorage.getItem("Data")));
                return myVar;
            });
            return myVar;
            console.log("Does this continue");
        }

    And here is where it is called:

        $(document).on("pageshow", "#Information", function() {
            $xml = $(getData()); // Here is the function call
            console.log($xml);   // However, it will instantly go to this line before 'getData' has returned a value.
            $xml.find('AllData').each(function() {
                $(this).find('item').each(function() {
                    if ($(this).find('Category').text() == "Facilities") {
                        console.log($(this).find('Title').text());
                        // Do stuff here
                    } else if ($(this).find('Category').text() == "Contacts" || $(this).find('Category').text() == "Information") {
                        console.log($(this).find('Title').text());
                        // Do stuff here too
                    }
                });
                $('#informationList').html(output).listview().listview("refresh");
                console.log("Finished");
            });
        });

    Right now, I'm unsure of why it is not working. My guess is that it is because I am calling a function within a function. Does anyone have any ideas on how this issue can be fixed?
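
    The usual resolution (a sketch of the idea, not a drop-in replacement): $.get is asynchronous, so getData cannot return the result directly; instead it should hand the parsed XML to a callback (or return the jqXHR/promise), and the pageshow handler should do its work inside that callback.

        function getData(onReady) {
            $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function (data) {
                var $xml = $(data);
                onReady($xml); // runs only once the response has actually arrived
            });
        }

        $(document).on("pageshow", "#Information", function () {
            getData(function ($xml) {
                $xml.find("AllData item").each(function () {
                    console.log($(this).find("Title").text());
                    // build the list here, then refresh the listview
                });
            });
        });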

    Read the article

  • How to track conversion rate (clicks to sales) from an internal advertising system?

    - by Ed Woodcock
    I am currently writing an internal advertising system for a company client's website, where the adverts will only be seen by internal users, and all transactions take place internally to the site (i.e. the adverts are for member-only content available on the site). Does anyone have any recommendations as to the best way to track the conversion rate of these adverts (i.e. views:clicks:sales)?

    EDIT: I'm not looking for a 'Why don't you use Google Analytics'-type answer; I'm looking for possible architecture outlines, i.e. a 'why don't you store a GUID in a cache temporarily and see if it ties to the advert' kind of answer. /EDIT

    In a previous job I did something based on an internal cache, which simply did view:click tracking; however, the addition of the sales rate makes this task more complex, especially if we take into account the idea that someone may click through to an advert and not purchase immediately.

    Cheers, Ed

    (N.B. I'm leaving this purposely vague in order to (hopefully) get some answers that provide ideas I've yet to think of by coming at the problem from a different angle.)
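
    To make the GUID idea slightly more concrete, here is a rough sketch (all type and method names are made up): record a click row with a fresh GUID when an advert is clicked and stash that GUID in the visitor's session or on the destination URL; when a sale completes, credit it to the advert only if a click GUID is present and still inside an attribution window, which also covers the "clicked now, bought later" case.

        using System;

        public class AdClick
        {
            public Guid ClickId { get; set; }
            public int AdvertId { get; set; }
            public string UserName { get; set; }
            public DateTime ClickedAt { get; set; }
        }

        public class ConversionTracker
        {
            private static readonly TimeSpan AttributionWindow = TimeSpan.FromDays(7);

            public Guid RecordClick(int advertId, string userName)
            {
                var click = new AdClick
                {
                    ClickId = Guid.NewGuid(),
                    AdvertId = advertId,
                    UserName = userName,
                    ClickedAt = DateTime.UtcNow
                };
                SaveClick(click);     // persist the click (and bump the advert's click count)
                return click.ClickId; // caller keeps this in session / on the redirect URL
            }

            public void RecordSale(Guid? clickId, decimal amount)
            {
                if (clickId == null)
                    return;                                  // sale not driven by an advert
                AdClick click = FindClick(clickId.Value);
                if (click != null && DateTime.UtcNow - click.ClickedAt <= AttributionWindow)
                    SaveConversion(click.AdvertId, amount);  // advert gets credit for the sale
            }

            // Persistence is deliberately left out of the sketch.
            private void SaveClick(AdClick click) { }
            private AdClick FindClick(Guid clickId) { return null; }
            private void SaveConversion(int advertId, decimal amount) { }
        }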

    Read the article

  • Should I use a class in this: Reading an XML file using lxml.

    - by PulpFiction
    Hi everyone. This question is in continuation of my previous question, in which I asked about passing around an ElementTree. I need to read the XML files only, and to solve this I decided to create a global ElementTree and then parse it wherever required. My questions are: Is this an acceptable practice? I heard global variables are bad. If I don't make it global, I was advised to make a class. But do I really need to create a class? What benefits would I get from that approach? Note that I would be handling only one ElementTree instance per run, and the operations are read-only. If I don't use a class, how and where do I declare that ElementTree so that it is available globally? (Note that I would be importing this module.) Please answer this question with the understanding that I am a beginner to development, and at this stage I can't figure out whether to use a class or just go with the functional-style programming approach.
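
    One way to keep a single parse without a bare global (a sketch; the module layout and file name are placeholders): hide the tree as module-private state behind a small accessor, and import that accessor wherever read-only lookups are needed. A class wrapper buys roughly the same thing plus an obvious home for related lookup methods, but for one read-only tree per run it is not required.

        # xmlstore.py -- parse once, share everywhere (illustrative module)
        from lxml import etree

        _tree = None

        def get_tree(path="data.xml"):
            """Parse the XML file on first use and return the cached ElementTree afterwards."""
            global _tree
            if _tree is None:
                _tree = etree.parse(path)
            return _tree

        def find_text(xpath_expr):
            """Read-only convenience lookup against the shared tree."""
            node = get_tree().find(xpath_expr)
            return node.text if node is not None else None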

    Read the article

  • Is there a generic open-source reporting system out there?

    - by syntactico
    I recently started a position at a new company, and one of the first projects they want is an internal reporting system that points at databases A, B, and C and reports various metrics/statistics/predictions. Basically, the same thing I've done or worked on at every company I've ever been hired by. Since this gets a bit boring after a while, I was wondering if there already exists some sort of open-source package that accomplishes this goal. Ideally, it would work with multiple databases out of the box (PostgreSQL, MySQL, MSSQL, Oracle minimally), determine relationships between tables (either automatically from their schemas, or by allowing you to manually set them up after pooling all the tables), and allow you to create reports based on a subset of tables, customizing what data you want displayed/calculated (I suppose this would be challenging, since you've no idea what every audience wants, and would need to make this flexible). I'm debating making something like this in my spare time if one does not already exist. Just curious.

    Read the article

  • What are the possible tags inside the "global" tag in Magento "config.xml" file?

    - by Knowledge Craving
    Can any professional, experienced Magento developer tell me how to accomplish the following in Magento? I want to know which tags can appear inside the "global" tag of the config.xml file in every module's etc folder. I have tried searching for this answer in many places on the Internet, but in vain. Please provide full details for Magento version 1.4.0.0, because I want the users accessing this website to find it useful, instead of scratching their heads. I really want a detailed explanation, because every newbie like me gets totally confused at this point. From what I know till now, in this file you can set up routers, rewrites, cron jobs, admin html, front-end html and many more. But without any strong concepts, no one can go ahead with the belief that his code is 100% correct w.r.t. the Magento MVC architecture. So please, I want that strong fundamental concept explained here in detail, so that no one falls into this pitfall ever again. Many, many thanks to those who can answer only upon knowing the whole concept clearly.
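
    As a starting point, here is an illustrative (not exhaustive) config.xml for a hypothetical module named Mycompany_Mymodule, showing the kinds of children commonly placed under the global tag in Magento 1.x: class-group declarations for models, blocks and helpers, a setup resource, and an event observer. Area-specific things such as routers and layout updates normally live under the frontend or admin sections rather than under global.

        <config>
            <global>
                <models>
                    <mymodule>
                        <class>Mycompany_Mymodule_Model</class>
                    </mymodule>
                </models>
                <blocks>
                    <mymodule>
                        <class>Mycompany_Mymodule_Block</class>
                    </mymodule>
                </blocks>
                <helpers>
                    <mymodule>
                        <class>Mycompany_Mymodule_Helper</class>
                    </mymodule>
                </helpers>
                <resources>
                    <mymodule_setup>
                        <setup>
                            <module>Mycompany_Mymodule</module>
                        </setup>
                    </mymodule_setup>
                </resources>
                <events>
                    <sales_order_place_after>
                        <observers>
                            <mymodule_observer>
                                <class>mymodule/observer</class>
                                <method>orderPlaced</method>
                            </mymodule_observer>
                        </observers>
                    </sales_order_place_after>
                </events>
            </global>
        </config>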

    Read the article

  • How do I get transparent, efficient, file system snapshotting or versioning on ext3/4?

    - by shovas
    I've long thought about versioning file systems. This is a killer feature, and I've looked at Wayback, ext3cow, ZFS, FUSE solutions, or just cvs/svn/git overlays. I consider ext3cow the model for my requirements: transparent and efficient. I can do without the extra "ls abc@timestamp" feature, as long as I somehow get automated, transparent versioning of my files. It could be instantaneous, or it could be based on snapshots at intervals of 10s, 30s, 1m, 5m, 15m, etc. Just something that will efficiently deal with thousands of files in a given directory, all of various sizes, most small, but some upwards of 100 MB to 1 GB. ZFS isn't really an option as I'm on Linux (and I would prefer not to use it through FUSE, as I already have an ext3 setup I want to version, not something new). What solutions are out there?

    Read the article

  • Are there actually lag times to remove an email address from "the system"? [closed]

    - by Alex Gosselin
    For example, you send an unsubscribe message to a legitimate company or to a spammer, and they reply that they will remove you, but that it may take up to 72 hours to take effect. I find it hard to believe anything that simple could take more than 3/4 of a second to take effect system-wide. Another example would be when you call the Visa activation line: there is a "delay" of several minutes while they try to sell you some kind of insurance. Usually, just as you get the point across that you don't want it, they will tell you your card has been activated and let you go. Are these delays real?

    Read the article
