Search Results

Search found 27229 results on 1090 pages for 'default aspx'.


  • Is there a reason why a base class decorated with XmlInclude would still throw a type unknown exception when serialized?

    - by Tedford
    I will simplify the code to save space, but what is presented does illustrate the core problem. I have a class which has a property that is a base type. There exist 3 derived classes which could be assigned to that property. If I assign any of the derived classes to the container then the XmlSerializer throws the dreaded "The type xxx was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically." exception when attempting to serialize the container. However, my base class is already decorated with that attribute, so I figure there must be an additional "hidden" requirement. The really odd part is that the default WCF serializer has no issues with this class hierarchy. The Container class [DataContract] [XmlRoot(ElementName = "TRANSACTION", Namespace = Constants.Namespace)] public class PaymentSummaryRequest : CommandRequest { /// <summary> /// Gets or sets the summary. /// </summary> /// <value>The summary.</value> /// <remarks></remarks> [DataMember] public PaymentSummary Summary { get; set; } /// <summary> /// Initializes a new instance of the <see cref="PaymentSummaryRequest"/> class. /// </summary> public PaymentSummaryRequest() { Mechanism = CommandMechanism.PaymentSummary; } } The base class [DataContract] [XmlInclude(typeof(xxxPaymentSummary))] [XmlInclude(typeof(yyyPaymentSummary))] [XmlInclude(typeof(zzzPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(zzzPaymentSummary))] public abstract class PaymentSummary { } One of the derived classes [DataContract] public class xxxPaymentSummary : PaymentSummary { } The serialization code var serializer = new XmlSerializer(typeof(PaymentSummaryRequest)); serializer.Serialize(Console.Out,new PaymentSummaryRequest{Summary = new xxxPaymentSummary{}}); The Exception System.InvalidOperationException: There was an error generating the XML document. --- System.InvalidOperationException: The type xxxPaymentSummary was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically. at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write13_PaymentSummary(String n, String ns, PaymentSummary o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write14_PaymentSummaryRequest(String n, String ns, PaymentSummaryRequest o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write15_TRANSACTION(Object o) --- End of inner exception stack trace --- at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id) at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces) at UserQuery.RunUserAuthoredQuery() in c:\Users\Tedford\AppData\Local\Temp\uqacncyo.0.cs:line 47
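
    For comparison, here is a minimal, self-contained sketch with made-up type names (Animal, Dog, Cat, Kennel) of the same pattern: a derived type assigned to a base-typed member serializes cleanly once the base carries XmlInclude for each subtype. That suggests the attributes in the question are in principle sufficient and something outside them is at fault; one thing worth ruling out, given the Microsoft.Xml.Serialization.GeneratedAssembly frames in the stack trace, is a stale pre-generated *.XmlSerializers.dll built before the XmlInclude attributes were added (an assumption, not a confirmed diagnosis).

      using System;
      using System.Xml.Serialization;

      // The base type advertises every derived type that may be substituted for it.
      [XmlInclude(typeof(Dog))]
      [XmlInclude(typeof(Cat))]
      public abstract class Animal { public string Name; }

      public class Dog : Animal { public bool Barks; }
      public class Cat : Animal { public bool Purrs; }

      // Container with a member typed as the abstract base, analogous to PaymentSummaryRequest.Summary.
      public class Kennel { public Animal Resident; }

      class Program
      {
          static void Main()
          {
              var serializer = new XmlSerializer(typeof(Kennel));
              serializer.Serialize(Console.Out, new Kennel { Resident = new Dog { Name = "Rex", Barks = true } });
          }
      }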

    Read the article

  • BufferedReader no longer buffering after a while?

    - by BobTurbo
    Sorry I can't post code but I have a bufferedreader with 50000000 bytes set as the buffer size. It works as you would expect for half an hour, the HDD light flashing every two minutes or so, reading in the big chunk of data, and then going quiet again as the CPU processes it. But after about half an hour (this is a very big file), the HDD starts thrashing as if it is reading one byte at a time. It is still in the same loop and I think I checked free ram to rule out swapping (heap size is default). Probably won't get any helpful answers, but worth a try. OK I have changed heap size to 768mb and still nothing. There is plenty of free memory and java.exe is only using about 300mb. Now I have profiled it and heap stays at about 200MB, well below what is available. CPU stays at 50%. Yet the HDD starts thrashing like crazy. I have.. no idea. I am going to rewrite the whole thing in c#, that is my solution. Here is the code (it is just a throw-away script, not pretty): BufferedReader s = null; HashMap<String, Integer> allWords = new HashMap<String, Integer>(); HashSet<String> pageWords = new HashSet<String>(); long[] pageCount = new long[78592]; long pages = 0; Scanner wordFile = new Scanner(new BufferedReader(new FileReader("allWords.txt"))); while (wordFile.hasNext()) { allWords.put(wordFile.next(), Integer.parseInt(wordFile.next())); } s = new BufferedReader(new FileReader("wikipedia/enwiki-latest-pages-articles.xml"), 50000000); StringBuilder words = new StringBuilder(); String nextLine = null; while ((nextLine = s.readLine()) != null) { if (a.matcher(nextLine).matches()) { continue; } else if (b.matcher(nextLine).matches()) { continue; } else if (c.matcher(nextLine).matches()) { continue; } else if (d.matcher(nextLine).matches()) { nextLine = s.readLine(); if (e.matcher(nextLine).matches()) { if (f.matcher(s.readLine()).matches()) { pageWords.addAll(Arrays.asList(words.toString().toLowerCase().split("[^a-zA-Z]"))); words.setLength(0); pages++; for (String word : pageWords) { if (allWords.containsKey(word)) { pageCount[allWords.get(word)]++; } else if (!word.isEmpty() && allWords.containsKey(word.substring(0, word.length() - 1))) { pageCount[allWords.get(word.substring(0, word.length() - 1))]++; } } pageWords.clear(); } } } else if (g.matcher(nextLine).matches()) { continue; } words.append(nextLine); words.append(" "); }

    Read the article

  • Align text box with menu control using bootstrap

    - by JamesMitLondon
    Hi I am using bootstrap styles for my asp.net web application and I have a menu control at the top. I want to insert a search text box at the top right on the same line as the menu bar. Following is my code. Can anyone please suggest how to do this? Thanks. <div id="container"> <form runat="server" class="navbar-form navbar-left" role="search"> <div class = "navbar"> <div class="navbar-inner"> <div class="container"> <!-- .btn-navbar is used as the toggle for collapsed navbar content --> <a class="btn btn-navbar" data-target=".nav-collapse" data-toggle="collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </a> <div class="nav-collapse collapse"> <asp:Menu ID="NavigationMenu" runat="server" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal" CssClass="navbar" StaticMenuStyle-CssClass="nav" StaticSelectedStyle-CssClass="active" DynamicMenuStyle-CssClass="dropdown-menu"> <Items> <asp:MenuItem Text="Home" ToolTip="Home"></asp:MenuItem> <asp:MenuItem Text="Music" ToolTip="Music"> <asp:MenuItem Text="Classical" ToolTip="Classical" /> <asp:MenuItem Text="Rock" ToolTip="Rock" /> <asp:MenuItem Text="Jazz" ToolTip="Jazz" /> </asp:MenuItem> <asp:MenuItem Text="Movies" ToolTip="Movies"> <asp:MenuItem Text="Action" ToolTip="Action" /> <asp:MenuItem Text="Drama" ToolTip="Drama" /> <asp:MenuItem Text="Musical" ToolTip="Musical" /> </asp:MenuItem> </Items> </asp:Menu> <div class="form-group"> <input type="text" class="form-control" placeholder="Search"/> <button type="submit" class="btn btn-default">Submit</button> </div> </div> </div> </div> </div> </form> </div> `

    Read the article

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use the WebClient (or is there another, better option?) but there is a problem. I understand that opening up the stream takes some time and this cannot be avoided. However, reading it takes strangely much longer than reading it all at once. Is there a best way to do this? I mean two ways: to a string and to a file. Progress is my own delegate and it's working well. FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions, which made me realize that the problem lay elsewhere. I've tested custom WebResponse and WebRequest objects, the libCURL.NET library and even Sockets. The difference in time was gzip compression. The compressed stream length was simply half the normal stream length, and thus download time was less than 3 seconds with the browser. I'll put the code here in case someone wants to know how I solved this (some headers are not needed): public static string DownloadString(string URL) { WebClient client = new WebClient(); client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5"; client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; client.Headers["Accept-Encoding"] = "gzip,deflate,sdch"; client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3"; Stream inputStream = client.OpenRead(new Uri(URL)); MemoryStream memoryStream = new MemoryStream(); const int size = 32 * 4096; byte[] buffer = new byte[size]; if (client.ResponseHeaders["Content-Encoding"] == "gzip") { inputStream = new GZipStream(inputStream, CompressionMode.Decompress); } int count = 0; do { count = inputStream.Read(buffer, 0, size); if (count > 0) { memoryStream.Write(buffer, 0, count); } } while (count > 0); string result = Encoding.Default.GetString(memoryStream.ToArray()); memoryStream.Close(); inputStream.Close(); return result; } I think that the async functions will behave almost the same, but I will simply use another thread to fire this function. I don't need precise progress indication.
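
    As a hedged aside (a sketch of the built-in alternative hinted at in the last sentence, not the poster's actual solution): WebClient already offers event-based asynchronous downloads with progress reporting, so a hand-rolled thread around the blocking call is often unnecessary. Gzip is left out here; handling it transparently would mean deriving from WebClient and setting HttpWebRequest.AutomaticDecompression inside GetWebRequest.

      using System;
      using System.Net;

      class AsyncDownloadSketch
      {
          static void Main()
          {
              var client = new WebClient();

              // Raised repeatedly while the download runs.
              client.DownloadProgressChanged += (s, e) =>
                  Console.WriteLine("{0}%  ({1} of {2} bytes)",
                      e.ProgressPercentage, e.BytesReceived, e.TotalBytesToReceive);

              // Raised once at the end with the whole body as a string.
              client.DownloadStringCompleted += (s, e) =>
              {
                  if (e.Error != null) Console.WriteLine("Failed: " + e.Error.Message);
                  else Console.WriteLine("Done, {0} characters received.", e.Result.Length);
              };

              client.DownloadStringAsync(new Uri("http://example.com/"));
              Console.ReadLine();   // keep the console process alive while the download completes
          }
      }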

    Read the article

  • Centering contentplaceholders

    - by gh9
    I have a contentplaceholder which needs to be centered (as close to absolute as possible) to the center of the page. The issue I am having is when, the browser moves to different resolutions the centering is thrown off. I have tried using divs and tables. I have switched out the precent with em units, and still no workie. Any help would be greatly appreciated. My end goal is to have the contentplaceholder always be in the center of page regardless of the resolution of the monitor <div style="width:20%"/> <div style="width:60"> contentplaceholercode </div> <div style="width:20%"/> <table> <tr > <td style="width:20%"> </td> <td style="width:60> </td> <td style="width:20%"> </td> </tr> </table> code for master and code for content page <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Master.master.cs" Inherits="WorkRecordr.Master" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <script src="Assets/Scripts/Jqeury1.4.4.js" type="text/javascript"></script> <title></title> <style type="text/css"> body { background-color: #326598; } .outer { width: 100%; border: solid 1px gray; padding: 1px; } .inner { border: solid 1px gray; width: 70%; margin-left: auto; margin-right: auto; } </style> </head> <body> <div style="text-align: center"> <asp:ContentPlaceHolder ID="head" runat="server"> </asp:ContentPlaceHolder> </div> <form id="form1" runat="server"> <div class="outer"> <div class="inner"> <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> </div> </form> </body> </html> <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="test.aspx.cs" Inherits="WorkRecordr.test" MasterPageFile="~/Master.Master" %> <asp:Content ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> aaaaaaaaa***aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa***aaaaaaaaaaaaaaaa </asp:Content>

    Read the article

  • 1180: Call to a possibly undefined method addEventListener

    - by Chris
    I'm going through some AS3 training, but I'm getting a weird error... I'm trying to add an event listener to the end of a motion tween in AS. I've created a tween, highlighted the frames, right clicked and copied the tween as AS and pasted it into the movie clip (I think there's a better way to do this, but I'm not sure what it is...) When I try to add the listener to the end of that code, I get the error. Here's my code. import fl.motion.AnimatorFactory; import fl.motion.MotionBase; import fl.motion.Motion; import flash.filters.*; import flash.geom.Point; import fl.motion.MotionEvent; import fl.events.*; var __motion_Enemy_3:MotionBase; if(__motion_Enemy_3 == null) { __motion_Enemy_3 = new Motion(); __motion_Enemy_3.duration = 30; // Call overrideTargetTransform to prevent the scale, skew, // or rotation values from being made relative to the target // object's original transform. // __motion_Enemy_3.overrideTargetTransform(); // The following calls to addPropertyArray assign data values // for each tweened property. There is one value in the Array // for every frame in the tween, or fewer if the last value // remains the same for the rest of the frames. __motion_Enemy_3.addPropertyArray("x", [0]); __motion_Enemy_3.addPropertyArray("y", [0]); __motion_Enemy_3.addPropertyArray("scaleX", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]); __motion_Enemy_3.addPropertyArray("scaleY", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]); __motion_Enemy_3.addPropertyArray("skewX", [0]); __motion_Enemy_3.addPropertyArray("skewY", [0]); __motion_Enemy_3.addPropertyArray("rotationConcat", [0]); __motion_Enemy_3.addPropertyArray("blendMode", ["normal"]); __motion_Enemy_3.addPropertyArray("cacheAsBitmap", [false]); __motion_Enemy_3.addEventListener(MotionEvent.MOTION_END, hurtPlayer); // Create an AnimatorFactory instance, which will manage // targets for its corresponding Motion. var __animFactory_Enemy_3:AnimatorFactory = new AnimatorFactory(__motion_Enemy_3); __animFactory_Enemy_3.transformationPoint = new Point(0.499558, 0.500000); // Call the addTarget function on the AnimatorFactory // instance to target a DisplayObject with this Motion. // The second parameter is the number of times the animation // will play - the default value of 0 means it will loop. // __animFactory_Enemy_3.addTarget(<instance name goes here>, 0); } function hurtPlayer(event:MotionEvent):void { this.parent.removeChild(this); } I've tried a few places for it, both with the animFactory_Enemy_3 variable and the motion_Enemy_3 variable - getting the same error both times.

    Read the article

  • php connecting to mysql server (localhost) very slow

    - by Ahmad
    Actually it's a little complicated. Summary: the connection to the DB is very slow. The page rendering takes around 10 seconds, but the last statement on the page is an echo and I can see its output while the page is loading in Firefox (IE is the same). In Google Chrome the output becomes visible only when the loading finishes. Loading time is approximately the same across browsers. On debugging I found out that it's the DB connectivity that is creating the problem. The DB was on another machine. To debug further, I deployed the DB on my local machine, so now the DB connection is at 127.0.0.1, but the connection still takes a long time. This means that the issue is with Apache/PHP and not with MySQL. But then I deployed my code on another machine which connects to the DB remotely, and everything seems fine. Basically the application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity issue remains. I installed another Apache on my machine and used default settings; the connection was still very slow. I added the following statements to measure the execution time: $stime = microtime(); $stime = explode(" ",$stime); $stime = $stime[1] + $stime[0]; // my code -- it involves connection to DB $mtime = microtime(); $mtime = explode(" ",$mtime); $mtime = $mtime[1] + $mtime[0]; $totaltime = ($mtime - $stime); echo $totaltime; The output is 0.0631899833679, but the Firebug Net panel shows a total loading time of 10-11 seconds. The same is the case with Google Chrome. I tried turning off the Windows firewall; connectivity is still slow and I just can't quite find the reason. I've tried multiple DB servers and multiple Apaches, and nothing seems to be working. Any idea what might be the problem?

    Read the article

  • How to compute a unicode string whose bidirectional representation is specified?

    - by valdo
    Hello, fellows. I have a rather pervert question. Please forgive me :) There's an official algorithm that describes how bidirectional unicode text should be presented. http://www.unicode.org/reports/tr9/tr9-15.html I receive a string (from some 3rd-party source), which contains latin/hebrew characters, as well as digits, white-spaces, punctuation symbols and etc. The problem is that the string that I receive is already in the representation form. I.e. - the sequence of characters that I receive should just be presented from left to right. Now, my goal is to find the unicode string which representation is exactly the same. Means - I need to pass that string to another entity; it would then render this string according to the official algorithm, and the result should be the same. Assuming the following: The default text direction (of the rendering entity) is RTL. I don't want to inject "special unicode characters" that explicitly override the text direction (such as RLO, RLE, etc.) I suspect there may exist several solutions. If so - I'd like to preserve the RTL-looking of the string as much as possible. The string usually consists of hebrew words mostly. I'd like to preserve the correct order of those words, and characters inside those words. Whereas other character sequences may (and should) be transposed. One naive way to solve this is just to swap the whole string (this takes care of the hebrew words), and then swap inside it sequences of non-hebrew characters. This however doesn't always produce correct results, because actual rules of representation are rather complex. The only comprehensive algorithm that I see so far is brute-force check. The string can be divided into sequences of same-class characters. Those sequences may be joined in random order, plus any of them may be reversed. I can check all those combinations to obtain the correct result. Plus this technique may be optimized. For instance the order of hebrew words is known, so we only have to check different combinations of their "joining" sequences. Any better ideas? If you have an idea, not necessarily the whole solution - it's ok. I'll appreciate any idea. Thanks in advance.

    Read the article

  • How to cache queries in EJB and return result efficient (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At the beginning I created the native query every time, using a construction like Query query = entityManager.createNativeQuery("select * from _table_"); Of course it is not that efficient; I performed some tests and found out that it really takes a lot of time... Then I found a better way to deal with it, using an annotation to define native queries: @NamedNativeQuery( name = "fetchData", value = "select * from _table_", resultClass=Entity.class ) and then just use it: Query query = entityManager.createNamedQuery("fetchData"); The performance of the line above is two times better than where I started from, but still not as good as I expected... Then I found that I can switch to the Hibernate annotation for NamedNativeQuery (anyway, JBoss's implementation of EJB is based on Hibernate), and add one more thing: @NamedNativeQuery( name = "fetchData2", value = "select * from _table_", resultClass=Entity.class, readOnly=true) readOnly - marks whether the results are fetched in read-only mode or not. It sounds good, because at least in this case of mine I don't need to update the data; I just want to fetch it for a report. When I started the server to measure performance I noticed that the query without readOnly=true (by default it is false) returns results faster and faster with each iteration, while the other one (fetchData2) works at a "stable" speed; the time difference between them gets shorter and shorter, and after 5 iterations the speed of both was almost the same... The questions are: 1) Is there any other way to speed the query up? It seems that named queries should be prepared once, but I can't confirm it... In fact, creating the query once and then just reusing it would be better from a performance point of view, but it is problematic to cache this object, because after creating the query I can set parameters (when I use ":variable" in the query), and that changes the query object (doesn't it?). Well, is there any way to cache them? Or is a named query the best option I can use? 2) Are there any other approaches to make result retrieval faster? I mean, for instance, I don't need those Entities to be attached; I won't update them; all I need is to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up, but who knows :) P.S. I'm not asking about DB performance; all I need now is how to avoid creating the query every time, to use it efficiently, and to "allow" EJB to do less work for the same returned data.

    Read the article

  • Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

    - by Nailuj
    I'm developing an application (winforms C# .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request. I call an url with a parameter, and in return I get a small string with the result of the lookup. Simple enough. The challenge is however, that I have to do lots of these lookups (a couple of thousands), and I would like to limit the time needed. Therefore I would like to run requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this: public void startAsyncLookup(Action<LookupResult> returnLookupResult) { this.returnLookupResult = returnLookupResult; foreach (string number in numbersToLookup) { ThreadPool.QueueUserWorkItem(lookupNumber, number); } } public void lookupNumber(Object threadContext) { string numberToLookup = (string)threadContext; string url = @"http://some.url.com/?number=" + numberToLookup; WebClient webClient = new WebClient(); Stream responseData = webClient.OpenRead(url); LookupResult lookupResult = parseLookupResult(responseData); returnLookupResult(lookupResult); } I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup and provide it with a call-back function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want. Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule out that one. A colleague then tipped me that this might be a limitation in Windows. I googled a bit, and found amongst others this post saying that by default Windows limits the number of simultaneous request to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC2068)). The same post referred to above also provided a way to increase these limits. By adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself. So, I tried this (sat both to 20), restarted my computer, and tried to run my program again. Sadly though, it didn't seem to help any. I also kept an eye on the Resource Monitor (see screen shot) while running my batch lookup, and I noticed that my application (the one with the title blacked out) still only was using two TCP connections. So, the question is, why isn't this working? Is the post I linked to using the wrong registry values? Is this perhaps not possible to "hack" in Windows any longer (I'm on Windows 7)? Any ideas would be highly appreciated :) And just in case anyone should wonder, I have also tried with different settings for MaxThreads on ThreadPool (everyting from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.
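
    A note that may help, offered as a sketch rather than a verified fix for this exact setup: the MaxConnectionsPerServer registry values under Internet Settings are read by WinINet (Internet Explorer), whereas the HttpWebRequest/WebClient stack takes its per-host cap from System.Net.ServicePointManager, which would explain why the registry change had no visible effect on the application. The limit can be raised in code before the first request is made (or declaratively via <system.net><connectionManagement> in app.config):

      using System.Net;

      static class LookupBootstrap
      {
          // Call once at startup, before any WebClient/HttpWebRequest is created.
          public static void RaiseConnectionLimit()
          {
              // The HTTP/1.1 default is 2 concurrent connections per host; allow enough for the worker threads.
              ServicePointManager.DefaultConnectionLimit = 20;

              // Optional tweaks that often help when firing many small requests.
              ServicePointManager.Expect100Continue = false;
              ServicePointManager.UseNagleAlgorithm = false;
          }
      }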

    Read the article

  • bind parent child listview asp.net

    - by Chris
    I'm trying to display two linq query results in a prent and child listview. I have the following code which gets the correct values to populate the listviews but when I nest the child listview inside the parent list view, I get an error saying "The name 'ListView2' does not exist in the current context". I presume I need to bind the two listviews in the code behind but my problem is I don't know how or the best way to do this. I have read a couple of posts on similar problems but my lack of knowledge on the subject doesn't make it clear to me. Please can somebody help me figure this out? It's the final piece of code I need to complete. Many thanks in advance. Here is my .aspx code: <asp:ListView ID="ListView1" runat="server"> <EmptyDataTemplate>No data was returned.</EmptyDataTemplate> <ItemSeparatorTemplate><br /></ItemSeparatorTemplate> <ItemTemplate> <li> <asp:Label ID="LabelID" runat="server" Text='<%# Eval("RecordID") %>'></asp:Label><br /> <asp:Label ID="LabelNumber" runat="server" Text='<%# Eval("CartID") %>'></asp:Label><br /> <asp:Label ID="LabelName" runat="server" Text='<%# Eval("Number") %>'></asp:Label><br /> <asp:Label ID="LabelDestination" runat="server" Text='<%# Eval("Destination") %>'></asp:Label><br /> <asp:Label ID="LabelPkgName" runat="server" Text='<%# Eval("PkgName") %>'></asp:Label> <li> <!-- LIST VIEW FOR FEATURES --> <asp:ListView ID="ListView2" runat="server"> <EmptyDataTemplate>No data was returned.</EmptyDataTemplate> <ItemSeparatorTemplate><br /></ItemSeparatorTemplate> <ItemTemplate> <li> <asp:Label ID="LabelFeatureName" runat="server" Text='<%# Eval("FeatureName") %>'></asp:Label><br /> <asp:Label ID="LabelFeatureSetUp" runat="server" Text='<%# Eval("FeatureSetUp") %>'></asp:Label><br /> <asp:Label ID="LabelFeatureMonthly" runat="server" Text='<%# Eval("FeatureMonthly") %>'></asp:Label><br /> </li> </ItemTemplate> <LayoutTemplate> <ul ID="itemPlaceholderContainer" runat="server" style=""> <li runat="server" id="itemPlaceholder" /> </ul> </LayoutTemplate> </asp:ListView> <!-- LIST VIEW FOR FEATURES [END] --> </li> </li> </ItemTemplate> <LayoutTemplate> <ul ID="itemPlaceholderContainer" runat="server" style=""> <li runat="server" id="itemPlaceholder" /> </ul> </LayoutTemplate> </asp:ListView> Here is my code behind: MyShoppingCart userShoppingCart = new MyShoppingCart(); string cartID = userShoppingCart.GetShoppingCartId(); using (ShoppingCartv2Entities db = new ShoppingCartv2Entities()) { var CartNumber = from c in db.NewViews where c.CartID == cartID select c; foreach (NewView item in CartNumber) { ListView1.DataSource = CartNumber; ListView1.DataBind(); } var CartFeature = from f in db.NewViews join o in db.NumberFeatureViews on f.RecordID equals o.RecordID where f.CartID == cartID select new { o.FeatureName, o.FeatureSetUp, o.FeatureMonthly }; foreach (var x in CartFeature) { ListView2.DataSource = CartFeature; ListView2.DataBind(); } }
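
    One commonly used pattern for this, shown below as a hedged sketch (it reuses the type names from the question, but the handler name and the exact LINQ shape are assumptions): the inner ListView exists once per parent row, so rather than referencing ListView2 directly from the code-behind, handle the outer list's ItemDataBound event and locate the child with FindControl. The outer list keeps being bound as above (a single DataSource/DataBind assignment is enough; the surrounding foreach is redundant).

      // Requires: using System.Linq; using System.Web.UI.WebControls;
      // Markup: <asp:ListView ID="ListView1" runat="server" OnItemDataBound="ListView1_ItemDataBound" ...>
      protected void ListView1_ItemDataBound(object sender, ListViewItemEventArgs e)
      {
          if (e.Item.ItemType != ListViewItemType.DataItem)
              return;

          // The nested control lives inside the current item, so it must be found per row.
          var featureList = (ListView)e.Item.FindControl("ListView2");
          var row = (NewView)((ListViewDataItem)e.Item).DataItem;   // the record bound to this parent item

          using (var db = new ShoppingCartv2Entities())
          {
              featureList.DataSource = (from o in db.NumberFeatureViews
                                        where o.RecordID == row.RecordID
                                        select new { o.FeatureName, o.FeatureSetUp, o.FeatureMonthly }).ToList();
              featureList.DataBind();   // ToList() above materializes the rows before the context is disposed
          }
      }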

    Read the article

  • initializing a vector of custom class in c++

    - by Flamewires
    Hey, basically I'm trying to store a "solution" and create a vector of these. The problem I'm having is with initialization. Here's my class for reference: class Solution { private: // boost::thread m_Thread; int itt_found; int dim; pfn_fitness f; double value; std::vector<double> x; public: Solution(size_t size, int funcNo) : itt_found(0), x(size, 0.0), value(0.0), dim(30), f(Eval_Functions[funcNo]) { for (int i = 1; i < (int) size; i++) { x[i] = ((double)rand()/((double)RAND_MAX))*maxs[funcNo]; } } Solution() : itt_found(0), x(31, 0.0), value(0.0), dim(30), f(Eval_Functions[1]) { for (int i = 1; i < 31; i++) { x[i] = ((double)rand()/((double)RAND_MAX))*maxs[1]; } } Solution operator= (Solution S) { x = S.GetX(); itt_found = S.GetIttFound(); dim = S.GetDim(); f = S.GetFunc(); value = S.GetValue(); return *this; } void start() { value = f (dim, x); } /* plus additional getter/setter methods*/ } Solution S(30, 1) or Solution(2, 5) work and initialize everything, but I need X of these solution objects. std::vector<Solution> Parents(X) will create X solutions with the default constructor, and I want to construct using the (int, int) constructor. Is there any easy (one-liner?) way to do this? Or would I have to do something like: size_t numparents = 10; vector<Solution> Parents; Parents.reserve(numparents); for (int i = 0; i<(int)numparents; i++) { Solution S(31, 0); Parents.push_back(S); }

    Read the article

  • Why does DebugActiveProcessStop crash my debugging app?

    - by SparkyNZ
    I have a debugging program which I've written to attach to a process and create a crash dump file. That part works fine. The problem I have is that when the debugger program terminates, so does the program that it was debugging. I did some Googling and found the DebugActiveProcessStop() API call. This didn't show up in my older MSDN documentation as it was only introduced in Windows XP so I've tried loading it dynamicall from Kernel32.dll at runtime. Now my problem is that my debugger program crashes as soon as the _DebugActiveProcessStop() call is made. Can somebody please tell me what I'm doing wrong? typedef BOOL (*DEBUGACTIVEPROCESSSTOP)(DWORD); DEBUGACTIVEPROCESSSTOP _DebugActiveProcessStop; HMODULE hK32 = LoadLibrary( "kernel32.dll" ); if( hK32 ) _DebugActiveProcessStop = (DEBUGACTIVEPROCESSSTOP) GetProcAddress( hK32,"DebugActiveProcessStop" ); else { printf( "Can't load Kernel32.dll\n" ); return; } if( ! _DebugActiveProcessStop ) { printf( "Can't find DebugActiveProcessStop\n" ); return; } ... void DebugLoop( void ) { DEBUG_EVENT de; while( 1 ) { WaitForDebugEvent( &de, INFINITE ); switch( de.dwDebugEventCode ) { case CREATE_PROCESS_DEBUG_EVENT: hProcess = de.u.CreateProcessInfo.hProcess; break; case EXCEPTION_DEBUG_EVENT: // PDS: I want a crash dump immediately! dwProcessId = de.dwProcessId; dwThreadId = de.dwThreadId; WriteCrashDump( &de.u.Exception ); return; case CREATE_THREAD_DEBUG_EVENT: case OUTPUT_DEBUG_STRING_EVENT: case EXIT_THREAD_DEBUG_EVENT: case EXIT_PROCESS_DEBUG_EVENT : case LOAD_DLL_DEBUG_EVENT: case UNLOAD_DLL_DEBUG_EVENT: case RIP_EVENT: default: break; } ContinueDebugEvent( de.dwProcessId, de.dwThreadId, DBG_CONTINUE ); } } ... void main( void ) { ... BOOL bo = DebugActiveProcess( dwProcessId ); if( bo == 0 ) printf( "DebugActiveProcess failed, GetLastError: %u \n",GetLastError() ); hProcess = OpenProcess( PROCESS_ALL_ACCESS, TRUE, dwProcessId ); if( hProcess == NULL ) printf( "OpenProcess failed, GetLastError: %u \n",GetLastError() ); DebugLoop(); _DebugActiveProcessStop( dwProcessId ); CloseHandle( hProcess ); }

    Read the article

  • Where can I find sample XHTML5 source codes?

    - by Bytecode Ninja
    Where can I find sample *X*HTML 5 pages? I mainly want to know if it is possible to mix and match XHTML 5 with other XML languages just like XHTML 1 or not. For example is something like this valid in XHTML 5? <!DOCTYPE html PUBLIC "WHAT SHOULD BE HERE?" "WHAT SHOULD BE HERE?"> <html xmlns="WHAT SHOULD BE HERE?" xmlns:ui="http://java.sun.com/jsf/facelets"> <head> <title><ui:insert name="title">Default title</ui:insert></title> <link rel="stylesheet" type="text/css" href="./css/main.css"/> </head> <body> <div id="header"> <ui:insert name="header"> <ui:include src="header.xhtml"/> </ui:insert> </div> <div id="left"> <ui:insert name="navigation" > <ui:include src="navigation.xhtml"/> </ui:insert> </div> <div id="center"> <br /> <span class="titleText"> <ui:insert name="title" /> </span> <hr /> <ui:insert name="content"> <div> <ui:include src="content.xhtml"/> </div> </ui:insert> </div> <div id="right"> <ui:insert name="news"> <ui:include src="news.xhtml"/> </ui:insert> </div> <div id="footer"> <ui:insert name="footer"> <ui:include src="footer.xhtml"/> </ui:insert> </div> </body> </html> Thanks in advance.

    Read the article

  • Maven Ant BuildException with maven-antrun-plugin ... unable to find javac compiler

    - by robsbobs
    Im trying to make Maven call an ANT build for some legacy code. the ant build builds correctly through ant however when i call it using the maven ant plugin it fails with the following error: [ ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project CoreServices: An Ant BuildException has occured: The following error occurred while executing this line: [ERROR] C:\dev\projects\build\build.xml:158: The following error occurred while executing this line: [ERROR] C:\dev\projects\build\build.xml:62: The following error occurred while executing this line: [ERROR] C:\dev\projects\build\build.xml:33: The following error occurred while executing this line: [ERROR] C:\dev\projects\ods\build.xml:41: Unable to find a javac compiler; [ERROR] com.sun.tools.javac.Main is not on the classpath. [ERROR] Perhaps JAVA_HOME does not point to the JDK. [ERROR] It is currently set to "C:\bea\jdk150_11\jre" My javac exists at C:\bea\jdk150_11\bin and this works for all other things. Im not sure where Maven is getting this version of JAVA_HOME. JAVA_HOME in windows envionrmental variables is set to C:\bea\jdk150_11\ as it should be. The Maven code that im using to call the build.xml is <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.6</version> <executions> <execution> <phase>install</phase> <configuration> <target> <ant antfile="../build/build.xml" target="deliver" > </ant> </target> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> </plugins> </build>

    Read the article

  • Creating multiple heads in remote repository

    - by Jab
    We are looking to move our team (~10 developers) from SVN to Mercurial. We are trying to figure out how to manage our workflow. In particular, we are trying to see if creating remote heads is the right solution. We currently have a very large repository with multiple, related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independent of other portions of the code-base. So each team is working on concurrent large features. The way we currently handle this in SVN is with branches. Team1 has a branch for Feature1, same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suit when their project is complete, merging of course. So my initial thought is to use named branches for these situations. Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question. Should the team PUSH that branch, in its current half-finished state, to the repository? This will create a second head in the core repo. My initial reaction was "NO!" as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages... First, the teams want to set up Continuous Integration to build this branch during their development cycle (months long). This will only work if the CI can pull this branch from the repo. This is something we do now with SVN: copy a CI build and change the branch. Easy. Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure. The chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach. And lastly, it's just for ease of use. The developers can easily just commit and push on their branch at any time without consequence (as they do today, in their SVN branches). Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy. For bug fixes we like the general workflow of Mercurial: anonymous branches that only consist of 1-2 commits. The simplicity is great for those cases. By the way, I've read this great article, which seems to favor named branches.

    Read the article

  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this: // Base struct shared by all subtypes // Plain-old data; can't use virtual functions struct POD { int kind; int GetFoo(); int GetBar(); int GetBaz(); int GetXyzzy(); }; enum Kind { Kind_Derived1, Kind_Derived2, Kind_Derived3 }; struct Derived1: POD { Derived1(): kind(Kind_Derived1) {} int GetFoo(); int GetBar(); int GetBaz(); int GetXyzzy(); // plus other type-specific data and function members }; struct Derived2: POD { Derived2(): kind(Kind_Derived2) {} int GetFoo(); int GetBar(); int GetBaz(); int GetXyzzy(); // plus other type-specific data and function members }; struct Derived3: POD { Derived3(): kind(Kind_Derived3) {} int GetFoo(); int GetBar(); int GetBaz(); int GetXyzzy(); // plus other type-specific data and function members }; and then the POD class's function members are implemented like this: int POD::GetFoo() { // Call kind-specific function switch (kind) { case Kind_Derived1: { Derived1 *pDerived1 = static_cast<Derived1*>(this); return pDerived1->GetFoo(); } case Kind_Derived2: { Derived2 *pDerived2 = static_cast<Derived2*>(this); return pDerived2->GetFoo(); } case Kind_Derived3: { Derived3 *pDerived3 = static_cast<Derived3*>(this); return pDerived3->GetFoo(); } default: throw UnknownKindException(kind, "GetFoo"); } } POD::GetBar(), POD::GetBaz(), POD::GetXyzzy(), and other members are implemented similarly. This example is simplified. The actual code has about a dozen different subtypes of POD, and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, and so every time we do that, we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. There is a lot of code that depends on these structs being plain-old-data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to do that. So, I'm looking for suggestions as to the best way to clean this up so that all the knowledge of how to call the subtype methods is centralized in one place, rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I can create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy. But before I start down that path, I'd like to know how others have dealt with this.

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello Everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter but it can just be normal php. This is the end output I need to achieve. <ul id="categories" class="menu"> <li rel="1"> Arts &amp; Humanities <ul> <li rel="2"> Photography <ul> <li rel="3"> 3D </li> <li rel="4"> Digital </li> </ul> </li> <li rel="5"> History </li> <li rel="6"> Literature </li> </ul> </li> <li rel="7"> Business &amp; Economy </li> <li rel="8"> Computers &amp; Internet </li> <li rel="9"> Education </li> <li rel="11"> Entertainment <ul> <li rel="12"> Movies </li> <li rel="13"> TV Shows </li> <li rel="14"> Music </li> <li rel="15"> Humor </li> </ul> </li> <li rel="10"> Health </li> And here is my SQL that I have to work with. -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `id` mediumint(8) NOT NULL auto_increment, `dd_id` mediumint(8) NOT NULL, `parent_id` mediumint(8) NOT NULL, `cat_name` varchar(256) NOT NULL, `cat_order` smallint(4) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; So I know that I am going to need at least 1 foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim

    Read the article

  • Is this Postgres function cost-efficient, or does it still need cleanup?

    - by kiranking
    There are two tables in a Postgres DB: english_all and english_glob. The first table contains words like international, confidential, booting, cooler, etc. I have written a function to get the words from english_all and then run a for loop for each word to get the word list that has not yet been inserted into the english_glob table. The word list is like I In Int Inte Inter .. b bo boo boot .. c co coo cool etc. For some reason a zwnj (zero-width non-joiner) is added during insertion into the english_all table, but in the function I am removing that character with regexp_replace. The Postgres function for_loop_test takes two parameters, min and max; based on those I am selecting words from the english_all table. The function code is like: DECLARE inMinLength ALIAS FOR $1; inMaxLength ALIAS FOR $2; mviews RECORD; outenglishListRow english_word_list;--custom data type eng_id,english_text BEGIN FOR mviews IN SELECT id,english_all_text FROM english_all where wlength between inMinLength and inMaxLength ORDER BY english_all_text limit 30 LOOP FOR i IN 1..char_length(regexp_replace(mviews.english_all_text,'(?)$','')) LOOP FOR outenglishListRow IN SELECT distinct on (regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','')) mviews.id, regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') where regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') not in(select english_glob.english_text from english_glob where i=english_glob.wlength) order by regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') LOOP RETURN NEXT outenglishListRow; END LOOP; END LOOP; END LOOP; END; Once I get the word list I will insert it into the other table, english_glob. My question is: is there anything I can add to or remove from the function to make it more efficient? Edit: Let's assume the english_all table has words like footer, settle, question, overflow, database, kingdom. If inMinLength = 5 and inMaxLength = 7, then in the outer loop footer, settle and kingdom will be selected. For the above 3 words the inner two loops will apply, to get words like f,fo,foo,foot,foote,footer,s,se,set,sett,settl .... etc. In the final step, the words which are bold (the proper words) will be entered into english_glob with another parameter, 1, to denote a proper word, stored in another field of the english_glob table. The remaining words will be stored with parameter 0, because on the next call words which are already saved in the database should not be fetched again. Edit 2: This is the complete code: CREATE TABLE english_all ( id serial NOT NULL, english_all_text text NOT NULL, wlength integer NOT NULL, CONSTRAINT english_all PRIMARY KEY (id), CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text) ) CREATE TABLE english_glob ( id serial NOT NULL, english_text text NOT NULL, is_prop integer default 1, CONSTRAINT english_glob PRIMARY KEY (id), CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text) ) insert into english_all(english_text) values ('ant'),('forget'),('forgive'); On a function call with parameters 3 and 6, the following rows should be fetched: a an ant f fo for forg forge forget. Next is the insert into the other table based on the above rows: insert into english_glob(english_text,is_prop) values ('a',1),('an',1), ('ant',1),('f',0), ('fo',0),('for',1), ('forg',0),('forge',1), ('forget',1), On the next function call, with parameters 3 and 7, the following rows should be fetched (because f, fo, for and forg are all already entered in the english_glob table): forgi forgiv forgive

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
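
    Below is a bare-bones sketch of the threshold idea from the question (the class name and the 500 MB cut-off are illustrative, and error handling is omitted): buffer in a MemoryStream until the limit is hit, then spill everything to a temporary FileStream and keep appending there. Keep in mind that FileStream writes normally land in the OS file cache first anyway, so the measured benefit may be smaller than expected.

      using System;
      using System.IO;

      // Accumulates incoming packets in memory, spilling to a temporary file once a size threshold is crossed.
      class SpillableBuffer : IDisposable
      {
          private const long Threshold = 500L * 1024 * 1024;   // 500 MB before spilling to disk
          private Stream _store = new MemoryStream();

          public void Write(byte[] buffer, int offset, int count)
          {
              if (_store is MemoryStream && _store.Length + count > Threshold)
              {
                  // Move what has been buffered so far onto disk, then continue writing there.
                  var file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                            FileAccess.ReadWrite, FileShare.None, 64 * 1024);
                  _store.Position = 0;
                  _store.CopyTo(file);          // Stream.CopyTo is available from .NET 4
                  _store.Dispose();
                  _store = file;
              }
              _store.Write(buffer, offset, count);
          }

          // Rewinds the underlying store so the received data can be processed sequentially afterwards.
          public Stream OpenForReading()
          {
              _store.Position = 0;
              return _store;
          }

          public void Dispose()
          {
              _store.Dispose();
          }
      }

    Processing would then read in chunks from OpenForReading(), exactly as it would from either stream on its own.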

    Read the article

  • Java loading user-specified classes at runtime

    - by user349043
    I'm working on robot simulation in Java (a Swing application). I have an abstract class "Robot" from which different types of Robots are derived, e.g. public class StupidRobot extends Robot { int m_stupidness; int m_insanityLevel; ... } public class AngryRobot extends Robot { float m_aggression; ... } As you can see, each Robot subclass has a different set of parameters. What I would like to do is control the simulation setup in the initial UI. Choose the number and type of Robots, give it a name, fill in the parameters etc. This is one of those times where being such a dinosaur programmer, and new to Java, I wonder if there is some higher level stuff/thinking that could help me here. So here is what I've got: (1) User Interface Scrolling list of Robot types on the left. "Add " and "<< Remove" buttons in the middle. Default-named scrolling list of Robots on the right. "Set Parameters" button underneath. (So if you wanted an AngryRobot, you'd select AngryRobot on the left list, click "Add" and "AngryRobot1" would show up on the right.) When selecting a Robot on the right, click "Set Parameters..." button which would call yet another model dialog where you'd fill in the parameters. Different dialog called for each Robot type. (2) Data structures an implementation As an end-product I think a HashMap would be most convenient. The keys would be Robot types and the accompanying object would be all of the parameters. The initializer could just retrieve each item one and a time and instantiate. Here's what the data structures would look like: enum ROBOT_TYPE {STUPID, ANGRY, etc} public class RobotInitializer { public ROBOT_TYPE m_type; public string m_name; public int[] m_int_params; public float[] m_float_params; etc. The initializer's constructor would create the appropriate length parameter arrays based on the type: public RobotInitializer(ROBOT_TYPE type, int[] int_array, float[] float_array, etc){ switch (type){ case STUPID: m_int_params = new int[STUPID_INT_PARAM_LENGTH]; System.arraycopy(int_array,0,m_int_params,0,STUPID_INT_PARAM_LENGTH); etc. Once all the RobotInitializers are instantiated, they are added to the HashMap. Iterating through the HashMap, the simulation initializer takes items from the Hashmap and instantiates the appropriate Robots. Is this reasonable? If not, how can it be improved? Thanks

    Read the article

  • Deleting a node in a family tree

    - by user559142
    Hi, I'm trying to calculate the best way to delete a node in a family tree. First, a little description of how the app works. My app makes the following assumptions: Any node can only have one partner. That means that any child a single node has will also be the partner node's child. Therefore, step relations, divorces etc. aren't compensated for. A node always has two parents - a mother and father cannot be added separately. If the user doesn't know the details, the node's attributes are set to a default value. Also, any node can add parents, siblings and children to itself. Therefore in-law relationships can be added. I have the following classes: FamilyMember String fName; String lName; String dob; String gender; FamilyMember mother, father, partner; ArrayList<FamilyMember> children; int index; int generation; void linkParents(); void linkPartner(); void addChild(); //gets & sets for fields Family ArrayList<FamilyMember> family; void addMember(); void removeMember(); FamilyMember getFamilyMember(index); ArrayList<FamilyMember> getFamilyMembers(); FamilyTree Family family; void removeMember(); //need help void displayFamilyMembers(); void addFamilyMember(); void enterDetails(); void displayAncestors(); void displayDescendants(); void printDescendants(); FamilyMember findRootNode(); void sortGenerations(); void getRootGeneration(); I am having trouble identifying the logic for removing a member. All other functions work fine. Has anyone developed a family tree app before who knows how to deal with removing various different nodes in the family "tree"? E.g. removing a leaf; removing a leaf with a partner (what if the partner has parents, etc.); removing a parent. It seems to be another recursive problem, but my head is swelling from overthinking it.

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kinda of slow. SSIS/BULK INSERT dumps the data into the table without regards to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO...SELECT that I tried in my first attempt) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for the both new tables. Once the migration is over one of the tables will have a one-to-many relationship to the other. 2. Moving columns from one table to another. 3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.
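
    For what it's worth, a sketch of a file-less alternative (connection strings and table/column names are placeholders): SqlBulkCopy streams rows straight from a data reader into the target table over the same bulk-load path as BULK INSERT, can keep identity values, and lets the SELECT reshape columns for the new schema. It still runs in whatever process hosts it, but there is no intermediate file, and it can be run on the server box itself.

      using System.Data.SqlClient;

      class TableCopier
      {
          static void Main()
          {
              const string connStr = "Server=.;Database=MyDb;Integrated Security=true";   // placeholder

              using (var source = new SqlConnection(connStr))
              using (var destination = new SqlConnection(connStr))
              {
                  source.Open();
                  destination.Open();

                  // Any SELECT works here, so columns can be split, renamed or defaulted for the new schema.
                  using (var reader = new SqlCommand(
                             "SELECT Id, Col1, Col2 FROM dbo.OldTable", source).ExecuteReader())
                  using (var bulk = new SqlBulkCopy(destination,
                             SqlBulkCopyOptions.KeepIdentity | SqlBulkCopyOptions.TableLock, null))
                  {
                      bulk.DestinationTableName = "dbo.NewTable";
                      bulk.BulkCopyTimeout = 0;      // no timeout for a large copy
                      bulk.BatchSize = 10000;
                      bulk.WriteToServer(reader);    // streams rows without materializing them client-side
                  }
              }
          }
      }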

    Read the article

  • Question about InputMismatchException while using Scanner

    - by aser
    The question : Input file: customer’s account number, account balance at beginning of month, transaction type (withdrawal, deposit, interest), transaction amount Output: account number, beginning balance, ending balance, total interest paid, total amount deposited, number of deposits, total amount withdrawn, number of withdrawals package sentinel; import java.io.*; import java.util.*; public class Ex7 { /** * @param args the command line arguments */ public static void main(String[] args) throws FileNotFoundException { int AccountNum; double BeginningBalance; double TransactionAmount; int TransactionType; double AmountDeposited=0; int NumberOfDeposits=0; double InterestPaid=0.0; double AmountWithdrawn=0.0; int NumberOfWithdrawals=0; boolean found= false; Scanner inFile = new Scanner(new FileReader("Account.in")); PrintWriter outFile = new PrintWriter("Account.out"); AccountNum = inFile.nextInt(); BeginningBalance= inFile.nextDouble(); while (inFile.hasNext()) { TransactionAmount=inFile.nextDouble(); TransactionType=inFile.nextInt(); outFile.printf("Account Number: %d%n", AccountNum); outFile.printf("Beginning Balance: $%.2f %n",BeginningBalance); outFile.printf("Ending Balance: $%.2f %n",BeginningBalance); outFile.println(); switch (TransactionType) { case '1': // case 1 if we have a Deposite BeginningBalance = BeginningBalance + TransactionAmount; AmountDeposited = AmountDeposited + TransactionAmount; NumberOfDeposits++; outFile.printf("Amount Deposited: $%.2f %n",AmountDeposited); outFile.printf("Number of Deposits: %d%n",NumberOfDeposits); outFile.println(); break; case '2':// case 2 if we have an Interest BeginningBalance = BeginningBalance + TransactionAmount; InterestPaid = InterestPaid + TransactionAmount; outFile.printf("Interest Paid: $%.2f %n",InterestPaid); outFile.println(); break; case '3':// case 3 if we have a Withdraw BeginningBalance = BeginningBalance - TransactionAmount; AmountWithdrawn = AmountWithdrawn + TransactionAmount; NumberOfWithdrawals++; outFile.printf("Amount Withdrawn: $%.2f %n",AmountWithdrawn); outFile.printf("Number of Withdrawals: %d%n",NumberOfWithdrawals); outFile.println(); break; default: System.out.println("Invalid transaction Tybe: " + TransactionType + TransactionAmount); } } inFile.close(); outFile.close(); } } But is gives me this : Exception in thread "main" java.util.InputMismatchException at java.util.Scanner.throwFor(Scanner.java:840) at java.util.Scanner.next(Scanner.java:1461) at java.util.Scanner.nextInt(Scanner.java:2091) at java.util.Scanner.nextInt(Scanner.java:2050) at sentinel.Ex7.main(Ex7.java:36) Java Result: 1

    Read the article

  • manipulate content inserted by ajax, without using the callback

    - by Cody
    I am using ajax to insert a series of informational blocks via a loop. The blocks each have a title and a long description that is hidden by default. They function like an accordion, only showing one description at a time amongst all of the blocks. The problem is opening the description on the first block. I would REALLY like to do it with javascript right after the loop that is creating them is done. Is it possible to manipulate elements created after an ajax call without using the callback? <!-- example code--> <style> .placeholder, .long_description{ display:none;} </style> </head><body> <script> /* yes, this script is in the body, dont know if it matters */ $(document).ready(function() { $(".placeholder").each(function(){ // Use the divs to get the blocks var blockname = $(this).html(); // the contents of the div is the ID for the ajax POST $.post("/service_app/dyn_block",'form='+blockname, function(data){ var divname = '#div_' + blockname; $(divname).after(data); $(this).setupAccrdFnctly(); //not the actual code }); }); /* THIS LINE IS THE PROBLEM LINE, is it possible to reference the code ajax inserted */ /* Display the long description in the first dyn_block */ $(".dyn_block").first().find(".long_description").addClass('active').slideDown('fast'); }); </script> <!-- These lines are generated by PHP --> <!-- It is POSSIBLE to display the dyn_blocks --> <!-- here but I would really rather not --> <div id="div_servicetype" class="placeholder">servicetype</div> <div id="div_custtype" class="placeholder">custtype</div> <div id="div_custinfo" class="placeholder">custinfo</div> <div id="div_businfo" class="placeholder">businfo</div> </body>

    Read the article
