Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 303/869

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to those files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked and that there are already some good tutorials on the subject, but I just can't find them.
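    A minimal sketch of one possible approach - a single embedded SQLite file shipped next to the application and opened through the System.Data.SQLite ADO.NET provider, so no database server has to be installed. The file name and the Notes table are placeholders, not anything from the question:

        using System;
        using System.Data.SQLite;   // third-party ADO.NET provider for SQLite

        class LocalStore
        {
            // A single-file database next to the executable; created on first use.
            private const string ConnectionString = "Data Source=appdata.db;Version=3;";

            public static void EnsureSchema()
            {
                using (var conn = new SQLiteConnection(ConnectionString))
                {
                    conn.Open();
                    using (var cmd = new SQLiteCommand(
                        "CREATE TABLE IF NOT EXISTS Notes (Id INTEGER PRIMARY KEY, Text TEXT)", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    For the LINQ requirement, LINQ to SQL does not target SQLite out of the box; SQL Server Compact (.sdf files) is another serverless, single-file option that does work with LINQ to SQL, so it may be the closer fit if keeping LINQ queries largely unchanged is the priority.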

    Read the article

  • Put together tiles in android sdk and use as background

    - by Jon
    In a feeble attempt to learn some Android development I am stuck at graphics. My aim here is pretty simple:
    - Take n small images and build a random image, larger than the screen, with the possibility to scroll around.
    - Have an animated object move around on it.
    I have looked at the SDK examples, Lunar Lander especially, but there are a few things I utterly fail to wrap my head around. I've got a bird's-eye plan (which in my head seems reasonably sane). How do I merge the tiles into one large image? The background is static, so I figure I should do this:
    - Make a 2D array with refs to the tiles
    - Make a large Drawable and draw the tiles on it
    - At init, draw this big image as the background
    - At each onDraw, redraw the background at the previous spot of the moving object, and the moving object at its new location
    The problem is the hands-on part. I load the small images with "Bitmap img1 = BitmapFactory.decodeResource(res, R.drawable.img1)", but then what? Should I make a canvas and draw the images on it with "canvas.drawBitmap(img1, x, y, null);"? If so, how do I get a Drawable/Bitmap from that? I'm totally lost here, and would really appreciate some hands-on help (I would of course be grateful for general hints as well, but I'm primarily trying to understand the Graphics objects). To show you, dear reader, my level of confusion, I will add my last desperate try:

        Drawable drawable;
        Canvas canvas = new Canvas();
        Bitmap img1 = BitmapFactory.decodeResource(res, R.drawable.img1); // 50 x 100 px image
        Bitmap img2 = BitmapFactory.decodeResource(res, R.drawable.img2); // 50 x 100 px image
        canvas.drawBitmap(img1, 0, 0, null);
        canvas.drawBitmap(img2, 50, 0, null);
        drawable.draw(canvas);  // obviously wrong as drawable == null
        this.setBackground(drawable);

    Thanks in advance

    Read the article

  • jquery post and get request different on local intranet and live server

    - by nccsbim071
    Hi, I have been developing an ASP.NET MVC application where I need to make a large number of jQuery POST and GET requests to call controller methods and get back JSON results. Everything is working fine. The problem is that I had to write different jQuery request URLs on the local intranet (deployed as a virtual directory) and on the live server. The current jQuery request URL looks like this: $.post("/ProjectsChat/GetMessages", { roomId: 24 }, .......... This format works fine on the live server but not on the local intranet, since on the intranet I have made a virtual directory. It only works when I prepend the name of the virtual directory, like this: $.post("MyProjectVirutalDirName/ProjectsChat................... I am sure most of you must have come across the same problem. I have now finished the project, and there are a large number of jQuery requests; I want to test the application by deploying it on the local intranet and fix the bugs there. Changing all the jQuery requests for the local intranet doesn't seem like a feasible solution, and I can't just deploy the project to the live server and test it there - the client will kill me. I need some expert advice. Please help. Thanks
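    One common way around this is to generate the URL server-side in the view, so the virtual directory is prepended automatically. A sketch, assuming a Web Forms (.aspx) view engine and the controller/action names from the example above:

        <script type="text/javascript">
            // Url.Action emits "/ProjectsChat/GetMessages" on the live server and
            // "/MyProjectVirutalDirName/ProjectsChat/GetMessages" under the virtual directory.
            var getMessagesUrl = '<%= Url.Action("GetMessages", "ProjectsChat") %>';

            $.post(getMessagesUrl, { roomId: 24 }, function (result) {
                // handle the JSON result here
            });
        </script>

    The same idea works for every other request: never hard-code "/Controller/Action" in script, always route it through Url.Action (or a single application-root variable emitted once per page).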

    Read the article

  • UIImage - should be loaded with [UIImage imageNamed:@""] or not ?

    - by sagar
    Hello everyone. I have a number of images in my application (more than 50, approximately, and the count can grow according to the client's needs). Each image is very large - roughly 1024 x 768 at 150 dpi. I have to add all these images to a scroll view and display them. My question is as follows: as I see it, there are two options for loading large images - imageNamed:@"" or loading asynchronously when viewDidLoad is called. Which is preferable?

        imgModel.image = [UIImage imageNamed:[dMain valueForKey:@"imgVal"]];

    or like this:

        NSURL *ur = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:lblModelName.text ofType:@"png"] isDirectory:NO];
        NSURLRequest *req = [NSURLRequest requestWithURL:ur cachePolicy:NSURLCacheStorageNotAllowed timeoutInterval:40];
        [ur release];
        NSURLConnection *con = [[NSURLConnection alloc] initWithRequest:req delegate:self];
        if (con) {
            myWebData = [[NSMutableData data] retain];
        } else {
        }

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
            [myWebData setLength:0];
        }
        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [myWebData appendData:data];
        }
        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            [connection release];
        }
        - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
            NSLog(@"ImageView Ref From - %@", imgV);  // my image view; set its image
            imgV.image = [UIImage imageWithData:myWebData];
            [connection release];
            connection = nil;
        }

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have two questions related to each other: I have an object (class) called, say, MyClass, which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program, so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class, which accepts a list of MyClass and updates all the instances in the list. However, would it be better instead to encapsulate the Update() method inside the MyClass object, so that I would say: foreach (MyClass obj in MyClassList) { obj.Update(); } Which is the better implementation, and why? The Update method requires an XML reader. I have written an XML reader class, basically a wrapper over the standard XML reader the language natively provides, which does application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - should MyClass inherit from the XML reader because it uses a few of its methods? I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside of MyClass either; a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XmlReader isn't static? Any comments would be greatly appreciated. Thanks, Thomas
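    For illustration only, a minimal sketch of the composition-based shape being weighed above - the reader is passed into Update() rather than inherited from or stored inside MyClass. The class and member names (MyXmlReader, ReadRecord) are hypothetical stand-ins, not the question's actual code:

        // Hypothetical wrapper from the question; only the piece needed here is sketched.
        public class MyXmlReader
        {
            public MyClass ReadRecord(int id)
            {
                // read one record's fields from XML and return them
                return new MyClass { Id = id };
            }
        }

        public class MyClass
        {
            public int Id { get; set; }
            public string Name { get; set; }

            // The record knows how to refresh its own fields, but the dependency on the
            // application-specific reader is injected, not inherited or owned.
            public void Update(MyXmlReader reader)
            {
                MyClass latest = reader.ReadRecord(Id);
                Name = latest.Name;
            }
        }

        // Caller:
        // foreach (MyClass obj in myClassList) { obj.Update(reader); }

    This keeps MyClass a simple record while still letting each instance encapsulate its own update logic; no inheritance from the reader is needed.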

    Read the article

  • how to get the value from a CSS class object in javascript

    - by amit
    I have to select an <a> element from the given class of objects. When I click on the anchor tag in the showcase_URL class, I want the jQuery function to get the value from the <a> tag. How can this be done? I cannot make this an id, as I am running a while loop in PHP to construct all the elements, so there will be multiple objects of this class, and there is no definite selector through which I can get the value of the anchor tag. Help would be much appreciated.

        echo '<div id="content">';
        echo '<div class="showcase_data">';
        echo '<div class="showcase_HEAD">'.$row->title.'</div>';
        echo '<div class="showcase_TYPE">'.$row->type.'</div>';
        echo '<div class="showcase_date">&nbsp;&nbsp;'.$row->date.'</div>';
        echo '<div class="showcase_THUMB" style="float: left;" ></div>';
        echo '<div class="showcase_TEXT">'.$row->details.'</div><br/>';
        echo '<div class="showcase_URL"><a class="purl" value="'.$row->num.'" href="'.$row->url.'">PROJECT URL</a></div>';
        echo '</div>';
        echo '</div>';

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not spread over multiple lines like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way using the built-in tools on a Linux box (CentOS in this case) to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:
    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script
    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?! For now, I have a workaround:
    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy
    Everything is great until the next time I update a CDC-tracked table; then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!

    Read the article

  • Creating nested arrays on the fly

    - by adardesign
    I am trying to loop over this HTML and build a nested array of the values I want to grab. It might look complex at first, but it is a simple question... This script is just part of an object containing methods.

    html

        <div class="configureData">
            <div title="Large">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Medium">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Small">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
        </div>

    javascript

        // this is part of a script.....
        parseData: function(dH) {
            dH.find(".configureData div").each(function(indA, eleA) {
                colorNSize.tempSizeArray[indA] = [eleA.title, [], [], [], []]
                $(eleZ).find("a").each(function(indB, eleB) {
                    colorNSize.tempSizeArray[indA][indB + 1] = eleC.title
                })
            })
        },

    I expect the end array should look like this:

        [
            ["large", ["yellow", "green", "blue"], ["true", "true", "true"], ["$55", "$55", "$55"]],
            ["Medium", ["yellow", "green", "blue"], ["true", "true", "true"], ["$55", "$55", "$55"]]
        ]
        // and so on....

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network - large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? (A rough sketch of such a console app follows below.) My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't support tons of files in a directory well
    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
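    A minimal sketch of the kind of console benchmark described above, assuming a local SQL Server instance and a network share; the connection string, share path, table and column names are placeholders. It times N flat-file appends against N parameterized inserts:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSql
        {
            const int N = 100000;  // number of test records

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                using (var writer = new StreamWriter(@"\\someshare\test\records.txt", true))
                {
                    for (int i = 0; i < N; i++)
                        writer.WriteLine("{0}|{1}|{2}", i, DateTime.Now, "some payload");
                }
                Console.WriteLine("Flat file appends: {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(
                        "INSERT INTO TestRecords (Id, Stamp, Payload) VALUES (@id, @stamp, @payload)", conn))
                    {
                        cmd.Parameters.Add("@id", System.Data.SqlDbType.Int);
                        cmd.Parameters.Add("@stamp", System.Data.SqlDbType.DateTime);
                        cmd.Parameters.Add("@payload", System.Data.SqlDbType.NVarChar, 100);
                        for (int i = 0; i < N; i++)
                        {
                            cmd.Parameters["@id"].Value = i;
                            cmd.Parameters["@stamp"].Value = DateTime.Now;
                            cmd.Parameters["@payload"].Value = "some payload";
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
                Console.WriteLine("SQL inserts: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Searches, updates, and concurrent readers/writers can be added along the same lines - the flat-file side will need explicit locking and full-file scans, which is itself part of the argument.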

    Read the article

  • UnauthorizedAccessException in ComRegisterFunction when accessing registry on Win 7 64.

    - by sanbornc
    I have a [ComRegisterFunction] that I am using to register a BHO (Internet Explorer extension). During registration on 64-bit Windows 7 machines, an UnauthorizedAccessException is thrown on the call to subKey.SetValue("NoExplorer", 1). The registry appears to have BHOs located at \HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects; however, I get the same exception when trying to register there. Any help would be appreciated.

        [ComRegisterFunction]
        public static void RegisterBho(Type type)
        {
            string BhoKeyName = "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Browser Helper Objects";
            RegistryKey registryKey = Registry.LocalMachine.OpenSubKey(BhoKeyName, true)
                                      ?? Registry.LocalMachine.CreateSubKey(BhoKeyName);
            if (registryKey == null)
                throw new ApplicationException("Unable to register Bho");
            registryKey.Flush();
            string guid = type.GUID.ToString("B");
            RegistryKey subKey = registryKey.OpenSubKey(guid) ?? registryKey.CreateSubKey(guid);
            if (subKey == null)
                throw new ApplicationException("Unable to register Bho");
            subKey.SetValue("NoExplorer", 1);
            registryKey.Close();
            subKey.Close();
        }
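    One detail worth checking in the code above (an observation, not a confirmed fix for this machine): OpenSubKey(guid) without the writable flag returns a read-only key, and SetValue on a read-only key throws UnauthorizedAccessException even when permissions are otherwise fine. Writing under HKLM on Windows 7 also requires the registration process to run elevated. A minimal sketch of the writable variant:

        // Open the per-BHO subkey writable; CreateSubKey already returns a writable key.
        RegistryKey subKey = registryKey.OpenSubKey(guid, true) ?? registryKey.CreateSubKey(guid);
        subKey.SetValue("NoExplorer", 1, RegistryValueKind.DWord);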

    Read the article

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl - in particular the size of a directory - and, upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: Some points to clarify:
    - My Perl script is only monitoring a particular directory, and upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (100MB at least, but usually a couple of GB) and my program fires before the copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid that overhead.
    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.

    Read the article

  • Drawbacks with using Class Methods in Objective C.

    - by RickiG
    Hi, I was wondering if there are any memory/performance drawbacks, or just drawbacks in general, with using class methods like:

        + (void)myClassMethod:(NSString *)param {
            // much to be done...
        }

    or

        + (NSArray*)myClassMethod:(NSString *)param {
            // much to be done...
            return [NSArray autorelease];
        }

    It is convenient to place a lot of functionality in class methods, especially in an environment where I have to deal with memory management (iPhone), but there is usually a catch when something is convenient. An example could be a made-up web service consisting of a lot of classes with very simple functionality, i.e. TomorrowsXMLResults, TodaysXMLResults, YesterdaysXMLResults, MondaysXMLResults, TuesdaysXMLResults, ... n. I collect a ton of these in my web service class, just instantiate the web service class, and let methods on that class call class methods on the 'Results' classes. The classes are simple but they handle large amounts of XML, instantiate lots of objects, etc. I guess I am asking whether class methods live in, or are treated differently on, the stack and in memory than messages to instantiated objects. Or are they just instantiated and pulled down again behind the scenes, and thus just a way of saving a few lines of code?

    Read the article

  • FluentNHibernate mapping of composite foreign keys

    - by Faron
    I have an existing database schema and wish to replace the custom data access code with Fluent NHibernate. The database schema cannot be changed, since it already exists in a shipping product, and it is preferable that the domain objects do not change, or change only minimally. I am having trouble mapping one unusual schema construct, illustrated by the following table structure:

        CREATE TABLE [Container] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container] PRIMARY KEY ( [ContainerId] ASC )
        )

        CREATE TABLE [Item] (
            [ItemId] [uniqueidentifier] NOT NULL,
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC )
        )

        CREATE TABLE [Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Item_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [ItemId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item_Property] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Container_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

    The existing domain model has the following class structure. The Property class contains other members representing the property's name and value. The ContainerProperty and ItemProperty classes have no additional members; they exist only to identify the owner of the Property. The Container and Item classes have methods that return collections of ContainerProperty and ItemProperty respectively. Additionally, the Container class has a method that returns a collection of all of the Property objects in the object graph. My best guess is that this was either a convenience method or a legacy method that was never removed. The business logic mainly works with Item (as the aggregate root) and only works with a Container when adding or removing Items. I have tried several techniques for mapping this but none work, so I won't include them here unless someone asks for them. How would you map this?
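    Not a full answer to the schema above, but a minimal sketch of how a composite key is typically declared in a Fluent NHibernate ClassMap, assuming an Item entity exposing ContainerId/ItemId properties (composite-id entities also need Equals/GetHashCode overrides, omitted here):

        using System;
        using FluentNHibernate.Mapping;

        // Domain class assumed from the question's model; only the key properties are shown.
        public class Item
        {
            public virtual Guid ContainerId { get; set; }
            public virtual Guid ItemId { get; set; }
        }

        public class ItemMap : ClassMap<Item>
        {
            public ItemMap()
            {
                Table("Item");
                // Composite primary key spanning ContainerId and ItemId.
                CompositeId()
                    .KeyProperty(x => x.ContainerId, "ContainerId")
                    .KeyProperty(x => x.ItemId, "ItemId");
            }
        }

    The Item_Property and Container_Property link tables would then need many-to-many mappings whose parent key columns list both halves of the composite key, which is where this particular schema tends to get awkward.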

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple Grid hosting account in anticipation of traffic. However, I first have some concerns with the Grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing past a 1GB MySQL limit in no time, so to deal with this I created distributed SQLite databases per user to store the large data objects. It has worked wonderfully so far; SQLite is super fast and simple. I know that reading from and writing to files is different in a Grid hosting environment, and I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two places at the same time (one updating the data hourly at most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will Grid hosting cause me serious problems? Thanks.

    Read the article

  • Events and references pattern

    - by serhio
    In a project I have the following relation between business objects (BO) and GUI objects. For example, G could represent a graphic with timelines, C a timeline curve, P the points of that curve, and T the time that each point represents. Each GUI object is associated with the corresponding BO object. When T changes, the GUI P captures the Changed event and changes its location. So when G is modified, it internally modifies its objects; as a result T changes, P moves, and the GUI G visually changes - everything is OK. But there is an inconvenience with this architecture: the BOs must not be recreated, because that would break the link between the BO and the GUI object. In particular, GUI P should always hold the same reference to T. If in the business logic I do, e.g., P1.T = new T(this.T + 10), GUI_P1 will not move anymore, because it waits for an event from the former P1.T reference, which no longer belongs to P1. So the solution was to always modify the existing objects, never recreate them. But here is another inconvenience: performance. Say I have a ready newC object that should replace the old one. Instead of doing G1.C = newC, I have to do "foreach P in C, foreach T in P, replace with the T from the corresponding P of newC". Is there a more optimal way to do this?
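    One alternative worth sketching (with hypothetical class and member names, assuming .NET-style events): instead of forbidding replacement, let the GUI object re-subscribe whenever the BO reference it wraps is swapped, so the business side is free to assign new objects:

        using System;

        // Minimal stand-ins for the question's T (time) and its GUI point; names are illustrative.
        public class T
        {
            private int _value;
            public event EventHandler Changed;
            public int Value
            {
                get { return _value; }
                set { _value = value; if (Changed != null) Changed(this, EventArgs.Empty); }
            }
        }

        public class GuiPoint
        {
            private T _time;

            // Swapping in a new T re-wires the event subscription instead of forbidding replacement.
            public T Time
            {
                get { return _time; }
                set
                {
                    if (_time != null) _time.Changed -= OnTimeChanged;  // detach from the old object
                    _time = value;
                    if (_time != null) _time.Changed += OnTimeChanged;  // attach to the new one
                    OnTimeChanged(_time, EventArgs.Empty);              // reposition for the new reference
                }
            }

            private void OnTimeChanged(object sender, EventArgs e)
            {
                // move the GUI point to match _time.Value
            }
        }

    With this shape, G1.C = newC only has to notify the GUI counterparts that their references changed (one re-wire per object) instead of copying every value across the whole graph.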

    Read the article

  • Having issues with initializing character array

    - by quandrum
    Ok, this is for homework about hashtables, but this is the simple stuff I thought I was able to do from earlier classes, and I'm tearing my hair out. The professor is not being responsive enough, so I thought I'd try here. We have a hashtable of stock objects. The stock objects are created like so:

        stock("IBM", "International Business Machines", 2573, date(date::MAY, 23, 1967))

    My constructor looks like:

        stock::stock(char const * const symbol, char const * const name, int sharePrice, date priceDate)
            : symbol(NULL), name(NULL), sharePrice(sharePrice), dateOfPrice(priceDate)
        {
            setSymbol(symbol);
            setName(name);
        }

    and setSymbol looks like this (setName is identical):

        void stock::setSymbol(const char* symbol)
        {
            if (this->symbol)
                delete [] this->symbol;
            this->symbol = new char[strlen(symbol)+1];
            strcpy(this->symbol, symbol);
        }

    and it refuses to allocate on the line

        this->symbol = new char[strlen(symbol)+1];

    with a std::bad_alloc. name and symbol are declared as

        char * name;
        char * symbol;

    I feel like this is exactly how I've done it in previous code. I'm sure it's something silly with pointers. Can anyone help?

    Read the article

  • How do you get the viewport scale after pinch/zoom on an iPhone web app?

    - by Loktar
    Does anyone know how to get the size in pixels or the scale value of the viewport after a user has pinched or double-tapped to zoom in/out on a page, in JavaScript? I've tried using window.innerWidth but I've had mixed results. Sometimes it seems to accurately give the number of pixels the viewport is showing; however, if I zoom way in on a page and then do a large pinch to zoom back out, window.innerWidth will be around 600-700 even though it is only showing ~200px of the page. The page is only 400px wide and it didn't show the checkered "you've gone too far" background you see when you zoom out beyond the page size. If I do small pinches to zoom in and out, window.innerWidth appears to work just fine. Unfortunately I can't rely on a user only making small pinch gestures :) I've also tried to use the scale property on the gesture event object, but I've found that unreliable because you don't always know the initial scale when you reload the page or use the back/forward buttons to navigate to it, even when using the meta tag to specify it. Ultimately, I'm trying to make an app that is aware of when a user is trying to zoom out beyond the maximum zoom level, so if there is another way to do this I'm interested in hearing about it :) Here's the code I'm using to get the innerWidth:

        document.body.addEventListener('gestureend', function (evt) {
            console.log(window.innerWidth); // inaccurate when doing large pinch gestures
        }, false);

    Thanks!

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP .NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a gridview inside of an update panel. It displays progress in an Ajax modalpopupextender. Of course I don't expect it to be instant what with the large db reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet - takes several minutes to load the database information into the gridview. I'm baffled why it would not perform the exact same as it had from Visual Studio. (It is in release mode and I have taken off the debug flag) I have since been trying things like eliminating unneeded update panels and throwing out the ajax tool. Nothing has made it any faster on production. It is not the database as far as I know, since it has been consistently fast from my computer (from visual studio) and consistently slow from the server. I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax modalpopupextenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP .NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • What would be a lightweight way to render a JSP page without an app/web server

    - by kolrie
    First, some background: I will have to work on code for a JSP that will demand a lot of code fixing and testing. This JSP receives a structure of given objects and renders it according to a couple of rules. What I would like to do is write a "test server" that would read some mock data out of a fixtures file and feed those mock objects into a factory that would be used by the JSP in question. The target app server is WebSphere, and I would like to code, change, and re-test quickly to check the HTML rendering. I have done something similar in the past, but the JSP part was just calling a method on a rendering object, so I created an ad hoc HTTP server that would read the fixture files, parse them, and render HTML. All I had to do was run it inside RAD, change the HTML code, and hit F5. So the question pretty much comes down to: Is there any standalone library or lightweight server (I thought of Jetty) that would take a JSP and, given the correct contexts (Request, Response, Session, etc.), render the proper HTML?

    Read the article

  • How to read data from a file (.dat) written in append mode

    - by govardhan
    We have an application which requires us to read data from a file (.dat) dynamically using deserialization. We actually get the first object, but then it throws a NullPointerException and "java.io.StreamCorruptedException: invalid type code: AC" when we access the other objects in a "for" loop.

    Writing objects:

        File file = null;
        FileOutputStream fos = null;
        BufferedOutputStream bos = null;
        ObjectOutputStream oos = null;
        try {
            file = new File("account4.dat");
            fos = new FileOutputStream(file, true);
            bos = new BufferedOutputStream(fos);
            oos = new ObjectOutputStream(bos);
            oos.writeObject(m);
            System.out.println("object serialized");
            amlist = new MemberAccountList();
            oos.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }

    Reading objects:

        try {
            MemberAccount m1;
            file = new File("account4.dat"); // add your code here
            fis = new FileInputStream(file);
            bis = new BufferedInputStream(fis);
            ois = new ObjectInputStream(bis);
            System.out.println(ois.readObject());
            while (ois.readObject() != null) {
                m1 = (MemberAccount) ois.readObject();
                System.out.println(m1.toString());
            }
            /* mList.addElement(m1);   // Here we have the issue throwing the null pointer exception
            Enumeration elist = mList.elements();
            while (elist.hasMoreElements()) {
                obj = elist.nextElement();
                System.out.println(obj.toString());
            } */
        } catch (ClassNotFoundException e) {
        } catch (EOFException e) {
            System.out.println("end");
        } catch (Exception ex) {
            ex.printStackTrace();
        }

    Read the article

  • How can I strip Python logging calls without commenting them out?

    - by cdleary
    Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks). I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:

        [Leave ERROR and above]
        my_module.SomeClass.method_with_lots_of_warn_calls

        [Leave WARN and above]
        my_module.SomeOtherClass.method_with_lots_of_info_calls

        [Leave INFO and above]
        my_module.SomeWeirdClass.method_with_lots_of_debug_calls

    Of course, you'd want to use it sparingly and probably with per-function granularity - only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this? Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)

    Read the article

  • Pointers, am I doing them correctly? Objective-c/cocoa

    - by Chris
    I have this in my @interface:

        struct track currentTrack;
        struct track previousTrack;
        int anInt;

    Since these are not objects, I do not have to declare them like int* anInt, right? And if setting non-object values like ints, booleans, etc., I do not have to release the old value, right (assuming a non-GC environment)? The struct contains objects:

        typedef struct track {
            NSString* theId;
            NSString* title;
        } *track;

    Am I doing that correctly? Lastly, I access the struct like this:

        [currentTrack.title ...];
        currentTrack.theId = @"asdf"; // LINE 1

    I'm also manually managing the memory (from a setter) for the struct like this:

        [currentTrack.title autorelease];
        currentTrack.title = [newTitle retain];

    If I'm understanding garbage collection correctly, I should be able to ditch that and just set it like LINE 1 (above)? Also, with garbage collection, I don't need a dealloc method, right? If I use garbage collection, does this mean the app only runs on OS 10.5+? And is there anything else I should know before I switch to garbage-collected code? Sorry there are so many questions. Very new to Objective-C and desktop programming. Thanks

    Read the article

  • php paging class

    - by Stick it to THE MAN
    Can anyone recommend a good PHP paging class? I have searched Google but have not seen anything that matches my requirements. Rather than "rolling my own" (and almost surely reinventing the wheel), I decided to check in here first. First some background: I am developing a website using Symfony 1.3.2 with the Propel ORM on Ubuntu 9.10. I am currently using the Propel pager, which is OK, but I recently started using memcache to speed things up a little. At this point the Propel pager is of little use, as it (AFAIK) only works with Propel objects. What I need is a class that meets the following requirements:
    - Has a clean interface, with separation of concerns, so that the logic to retrieve records from the data source (e.g. database) is encapsulated in a class (or at least a separate file).
    - Can work with arrays of objects.
    - Provides pagination links, and only fetches the data required for the current page. Also, the pagination should 'split' the available page links if there are too many. For example, if there are potentially 1000 possible page links, the pages displayed should be something like FIRST 2,3 ....999 LAST.
    - Can return the number of all the records in the table being queried, so that the FIRST and LAST links are available (this requirement is actually already covered by the previous one - I just wanted to re-emphasise it).
    Can anyone recommend such a library they have used successfully in the past? Alternatively, somebody may have 'hacked' (e.g. derived from) the current Propel pager to get it to do the things listed above - please let me know.

    Read the article
