Search Results

Search found 17259 results on 691 pages for 'behaviour driven design'.

  • Rolling back file moves, folder deletes and MySQL queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL UPDATE and INSERT queries, file deletes, and folder renames and creations. I can track the status of all INSERT commands and undo them if an error is thrown, but how do I do this with the UPDATE statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and the database? My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support, and it would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
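
    One commonly suggested shape for this problem is the Command pattern with compensating actions: every step knows how to reverse itself, executed steps are pushed onto a stack, and on failure the stack is unwound in reverse order. For an UPDATE, the command would SELECT and save the affected rows before running, then write them back in its undo. The sketch below is a hedged illustration in C# (the question is PHP, but the pattern is language-agnostic, and every name here is invented):

        // Minimal sketch of the Command pattern with compensating undo.
        using System;
        using System.Collections.Generic;
        using System.IO;

        interface IUndoableCommand
        {
            void Execute();
            void Undo();
        }

        // Records the original location so the move can be reversed.
        class MoveFileCommand : IUndoableCommand
        {
            private readonly string _from, _to;
            public MoveFileCommand(string from, string to) { _from = from; _to = to; }
            public void Execute() { File.Move(_from, _to); }
            public void Undo() { File.Move(_to, _from); }
        }

        class Batch
        {
            private readonly Stack<IUndoableCommand> _done = new Stack<IUndoableCommand>();

            public void Run(IEnumerable<IUndoableCommand> commands)
            {
                try
                {
                    foreach (var cmd in commands)
                    {
                        cmd.Execute();
                        _done.Push(cmd); // only executed steps get rolled back
                    }
                }
                catch
                {
                    while (_done.Count > 0) _done.Pop().Undo(); // reverse order
                    throw;
                }
            }
        }

    Note that this gives only best-effort rollback: if the process dies mid-undo, or another user touches the same rows concurrently, it cannot provide the atomicity a real transaction would.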

  • SQL, MVC, Entity Framework

    - by Anthony
    Hi, I'm using the above technologies and have run into what I presume is a design issue I have made. I have an Artwork table in my DB and have been able to add art (I now think of these as digital products) to a shopping cart + CartLine table fine. The system I have that adds art to galleries and user accounts etc. works fine. Now the client wants to sell T-shirts, mugs, pens etc., 'HardwareProducts', so I have created a 'HardwareProducts' table. So I now have two different product types in two tables. I use GUIDs as the PKs in both the HardwareProducts table and the Artwork table. When a customer adds an item to their cart, I store the GUID in the ProductID column in the CartItems table. The issue is that the database will not know which table to reference when I bring the LineItem object up through my ORM to the front end. In OOP I can see how you would have a base class of Product, with a DigitalProduct class and a HardwareProduct class derived from it, but how do you model this in SQL Server and the Entity Framework, please? Or is there another way?
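
    One standard way to model exactly this is table-per-type (TPT) inheritance: a base Products table owns the key, each subtype table reuses that key as both PK and FK, and the cart line references the base key so either subtype can sit in a cart. Below is a hedged sketch in Entity Framework code-first style (names invented; the same TPT mapping can also be configured in an EDMX designer model):

        // Sketch of table-per-type inheritance; EF materializes the right
        // subtype when a CartLine's Product is loaded.
        using System;
        using System.ComponentModel.DataAnnotations.Schema;
        using System.Data.Entity;

        public abstract class Product
        {
            public Guid ProductId { get; set; }   // one key for all subtypes
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        [Table("Artwork")]
        public class DigitalProduct : Product
        {
            public string DownloadUrl { get; set; }
        }

        [Table("HardwareProducts")]
        public class HardwareProduct : Product
        {
            public int StockLevel { get; set; }
        }

        public class CartLine
        {
            public int CartLineId { get; set; }
            public Guid ProductId { get; set; }
            public virtual Product Product { get; set; } // either subtype
        }

        public class ShopContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
            public DbSet<CartLine> CartLines { get; set; }
        }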

  • 500 Worker Threads, what kind of thread pool?

    - by Submerged
    I am wondering if this is the best way to do this. I have about 500 threads that run indefinitely, but sleep for a minute after each cycle of processing.

        ExecutorService es = Executors.newFixedThreadPool(list.size() + 1);
        for (int i = 0; i < list.size(); i++) {
            es.execute(coreAppVector.elementAt(i)); // coreAppVector is a vector of objects extending Thread
        }

    The code that is executing is really simple, basically just this:

        class aThread extends Thread {
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(ONE_MINUTE);
                    } catch (InterruptedException e) {
                        return;
                    }
                    // Lots of computation every minute
                }
            }
        }

    I do need a separate thread for each running task, so changing the architecture isn't an option. I tried making my thread pool size equal to Runtime.getRuntime().availableProcessors(), which attempted to run all 500 threads but only let 8 of them (4 cores with hyperthreading) execute; the other threads wouldn't surrender and let the rest have their turn. I tried putting in a wait() and notify(), but still no luck. If anyone has a simple example or some tips, I would be grateful! Well, the design is arguably flawed. The threads implement genetic programming (GP), a type of learning algorithm. Each thread analyzes trends and makes predictions. If a thread ever completes, the learning is lost. That said, I was hoping that sleep() would allow me to share some of the resources while a thread isn't "learning".
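
    The usual escape from this trap is to keep the *state* per task but not the *thread*: schedule each task's cycle on a small shared pool instead of parking 500 threads in sleep(). In Java that is ScheduledExecutorService.scheduleAtFixedRate; since most code in this listing is C#, here is the same idea as a hedged C# sketch (all names invented):

        // Sketch: 500 periodic jobs sharing the thread pool. Each job's state
        // lives in its object, so nothing is lost between cycles; await
        // Task.Delay releases the thread instead of blocking it for a minute.
        using System;
        using System.Threading.Tasks;

        class GpWorker
        {
            private int _generation; // survives between cycles

            public void RunOneCycle()
            {
                _generation++;
                // Lots of computation every minute goes here.
            }
        }

        class Program
        {
            static async Task Main()
            {
                var workers = new GpWorker[500];
                for (int i = 0; i < workers.Length; i++) workers[i] = new GpWorker();

                var loops = Array.ConvertAll(workers, w => Task.Run(async () =>
                {
                    while (true)
                    {
                        w.RunOneCycle();
                        await Task.Delay(TimeSpan.FromMinutes(1));
                    }
                }));

                await Task.WhenAll(loops);
            }
        }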

  • What techniques can be used to detect so-called "black holes" (spider traps) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are created specifically to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to its buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content differs from previous pages and the URLs are new to the crawler; however, this is only because the developer has made a "loop trap" or "black hole". The crawler will add each new sub page, and each sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we are actually not interested in. These loop traps are very difficult to find, and if your crawler has nothing in place to prevent them, it will get stuck on a certain domain indefinitely. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is to introduce a limit on the number of pages to be crawled, but I cannot see how that can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site like Wikipedia can have hundreds of thousands of pages, so such a limit could return a false positive for those kinds of sites. Any feedback is appreciated. Thanks.
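
    For what it's worth, practical crawlers tend to combine several cheap heuristics rather than rely on one reliable test: capping URL path depth, limiting how many near-identical pages a domain may contribute, and scoring pages for usefulness before extending a domain's crawl budget. A hedged C# sketch of two such checks (both thresholds invented for illustration):

        // Sketch: a URL-depth cap plus exact-duplicate content detection.
        // Real crawlers use shingling/SimHash to catch *near* duplicates too.
        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        class TrapFilter
        {
            private const int MaxPathDepth = 10;
            private readonly HashSet<string> _seenHashes = new HashSet<string>();

            // Reject URLs whose path nests implausibly deep (/a/b/c/d/...).
            public bool LooksLikeTrap(Uri url)
            {
                string[] segments = url.AbsolutePath.Split(
                    new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
                return segments.Length > MaxPathDepth;
            }

            // False when an identical body was already seen on this crawl.
            public bool IsNewContent(string body)
            {
                using (var sha = SHA256.Create())
                {
                    string hash = Convert.ToBase64String(
                        sha.ComputeHash(Encoding.UTF8.GetBytes(body)));
                    return _seenHashes.Add(hash);
                }
            }
        }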

  • Advanced All In One .NET Framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database across about 30 screens, and I would like to find a framework that will allow the maximum number of these features out of the box (.NET, C#, Windows Forms):

    - unit testing and continuous integration
    - screens with lists, combo boxes, text boxes, and add/delete/save/cancel that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionality according to user profiles
    - MVP/MVVM or whatever design patterns
    - either code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production
    - automatic control resizing
    - code metrics analysis
    - a code generator I can run against my entities to produce a rough form I can rearrange afterwards
    - a code documentation generator
    - ...

    Any ideas? I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. Would splitting the requirements across 3 or 4 existing open source frameworks be possible? Do you have any suggestions to add to the list before starting? What open source tools would you use to achieve these?

  • Refactor throwing not-null exception if using a method that has a dependency on a certain constructor

    - by N00b
    In the class below, the second constructor accepts a ForumThread object, which the IncrementViewCount() method uses. There is a dependency between the method and that particular constructor. Short of extracting the null check in IncrementViewCount() and LockForumThread() (plus other methods not shown) into a new private method, is there some simpler refactoring I can do, or a better design practice for this class, to guard against use of the wrong constructor with these dependent methods? Thank you for any suggestions in advance.

        private readonly IThread _forumLogic;
        private readonly ForumThread _ft;

        public ThreadLogic(IThread forumLogic) : this(forumLogic, null) { }

        public ThreadLogic(IThread forumLogic, ForumThread ft)
        {
            _forumLogic = forumLogic;
            _ft = ft;
        }

        public void Create(ForumThread ft)
        {
            _forumLogic.SaveThread(ft);
        }

        public void IncrementViewCount()
        {
            if (_ft == null)
                throw new NoNullAllowedException("_ft ForumThread is null; this must be set in the constructor");
            lock (_ft)
            {
                _ft.ViewCount = _ft.ViewCount + 1;
                _forumLogic.SaveThread(_ft);
            }
        }

        public void LockForumThread()
        {
            if (_ft == null)
                throw new NoNullAllowedException("_ft ForumThread is null; this must be set in the constructor");
            _ft.ThreadLocked = true;
            _forumLogic.SaveThread(_ft);
        }
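
    One answer worth sketching: make the dependency visible to the compiler instead of checking it at call time, by moving the members that need a ForumThread into a type whose only constructor requires one. This is a hedged suggestion built on the question's names, not the poster's code:

        // Sketch: methods that need a ForumThread now live on a type that
        // cannot exist without one, so the per-method null checks disappear.
        public class ThreadLogic
        {
            private readonly IThread _forumLogic;

            public ThreadLogic(IThread forumLogic) { _forumLogic = forumLogic; }

            public void Create(ForumThread ft) { _forumLogic.SaveThread(ft); }

            public BoundThreadLogic For(ForumThread ft)
            {
                if (ft == null) throw new ArgumentNullException("ft");
                return new BoundThreadLogic(_forumLogic, ft);
            }
        }

        public class BoundThreadLogic
        {
            private readonly IThread _forumLogic;
            private readonly ForumThread _ft;

            internal BoundThreadLogic(IThread forumLogic, ForumThread ft)
            {
                _forumLogic = forumLogic;
                _ft = ft;
            }

            public void IncrementViewCount()
            {
                lock (_ft)
                {
                    _ft.ViewCount = _ft.ViewCount + 1;
                    _forumLogic.SaveThread(_ft);
                }
            }

            public void LockForumThread()
            {
                _ft.ThreadLocked = true;
                _forumLogic.SaveThread(_ft);
            }
        }

    The single null check in For() then guards every dependent method at once.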

  • BackgroundWorker and foreach loop

    - by tomfox66
    I have to process a loop with BackgroundWorkers. Before I start a new loop iteration, I need to wait until the previous BackgroundWorker has finished. A while loop inside my foreach loop polling the IsBusy flag doesn't seem like a good idea to me. How should I design this loop so it waits for the worker to end before iterating?

        public void AutoConnect()
        {
            string[] HardwareList = new string[] { "d1", "d4", "ds1_2", "ds4_2" };
            foreach (string HW in HardwareList)
            {
                if (backgroundWorker1.IsBusy != true)
                {
                    backgroundWorker1.RunWorkerAsync(HW);
                    // Wait here until backgroundWorker1 finished
                }
            }
        }

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            BackgroundWorker worker = sender as BackgroundWorker;
            string FileName = e.Argument as string;
            try
            {
                if (worker.CancellationPending == true)
                {
                    e.Cancel = true;
                }
                else
                {
                    // Time consuming operation
                    ParseFile(FileName);
                }
            }
            catch { }
        }

        private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            label1.Text = e.ProgressPercentage.ToString() + " lines";
        }

        private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            if (e.Cancelled == true)
            {
                //this.tbProgress.Text = "Canceled!";
            }
            else if (!(e.Error == null))
            {
                //this.tbProgress.Text = ("Error: " + e.Error.Message);
            }
            else
            {
                label1.Text = "Done!";
            }
        }
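
    A common answer is to invert the loop: enqueue the items, start the first, and let RunWorkerCompleted launch the next, so the UI thread never waits. A hedged sketch that slots into the question's form class (needs using System.Collections.Generic;):

        // Sketch: RunWorkerCompleted chains to the next queued item.
        private readonly Queue<string> _pending = new Queue<string>();

        public void AutoConnect()
        {
            foreach (string hw in new string[] { "d1", "d4", "ds1_2", "ds4_2" })
                _pending.Enqueue(hw);
            StartNext();
        }

        private void StartNext()
        {
            if (_pending.Count > 0 && !backgroundWorker1.IsBusy)
                backgroundWorker1.RunWorkerAsync(_pending.Dequeue());
        }

        private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            // ...the question's cancel/error handling goes here...
            StartNext(); // chain to the next hardware item
        }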

  • Is jQuery forcing Adobe ColdFusion to abandon the dead Flash product line?

    - by crosenblum
    I have been reading a lot about how Flash development/design has died, and as jQuery, and in the near future HTML5, come out, will this start to push Adobe/ColdFusion away from Flash, towards less product linking? I mean, I love ColdFusion and want it to continue to grow; however, if Adobe only bought ColdFusion from Macromedia so that they could bundle Flash and ColdFusion together, does the death of Flash mean the death of ColdFusion? http://topnews.us/content/221385-jobs-says-adobes-flash-waning-and-had-its-day http://aext.net/2010/03/javascript-jquery-killing-flash-tutorial-jquery-plugin/ I really don't mind if Flash dies; I do mind greatly if ColdFusion does. Is the success of Flash linked to ColdFusion? If so, why, or why not? The purpose of this isn't to start some war about Flash's pros and cons. I was only worried that Adobe would cause problems for ColdFusion if Flash had some market/financial problems. That was my main concern... And no, I am not anti-Flash... But my financial sanity depends on ColdFusion being a success, so that is why I asked my question - because I want everyone else's opinion of this situation. Thank you.

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times...

    - by Andy
    I have this DB design:

        CREATE TABLE report (
            ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
            user MEDIUMINT NOT NULL,
            created TIMESTAMP NOT NULL,
            state INT NOT NULL,
            FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
            ID MEDIUMINT NOT NULL,
            k VARCHAR(128) NOT NULL,
            v TEXT NOT NULL,
            PRIMARY KEY( ID, k ),
            FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate markup:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns(
                @JoinColumn(name="ID", referencedColumnName="ID")
            )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties = new HashMap<String, ReportPropertyEntity>();
        }

    and

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {

            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {

                /**
                 * long#serialVersionUID
                 */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    I have inserted one Report and 4 properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get back the report 4 times, instead of just once with a Map containing the 4 properties. I'm not great at Hibernate, to be honest - I prefer straight SQL - but I must learn, and I can't see what it is that's wrong...? Any suggestions?

  • Understanding many to many relationships and Entity Framework

    - by Anders Svensson
    I'm trying to understand the Entity Framework, and I have a table "Users" and a table "Pages". These are related in a many-to-many relationship with a junction table "UserPages". First of all, I'd like to know if I'm designing this relationship correctly: one user can visit multiple pages, and each page can be visited by multiple users, so am I right in using many-to-many? Secondly, and more importantly, as I have understood m2m relationships, the User and Page tables should not repeat information, i.e. there should be only one record for each user and each page. But then, in the Entity Framework, how am I able to add new visits to the same page for the same user? I was thinking I could simply use the Count() method on the IEnumerable returned by a LINQ query to get the number of times a user has visited a certain page, but I see no way of doing that. In LINQ to SQL I could access the junction table and add records there to reflect added visits to a certain page by a certain user, as many times as necessary. But in the EF I can't access the junction table; I can only go from User to a Pages collection and vice versa. I'm sure I'm misunderstanding relationships or something, but I just can't figure out how to model this. I could always have a Count column in the Page table, but as far as I have understood, you're not supposed to design database tables like that - those values should be collected by queries... Please help me understand what I'm doing wrong.
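
    One way out, offered as a hedged sketch rather than the answer: promote the junction table to an entity with its own payload. EF hides a junction table only while it contains nothing but the two foreign keys; give it a VisitCount (or a timestamp per visit) and it becomes a normal entity reachable from both sides. Names below are invented:

        // Sketch: the junction is now an entity, so the m2m becomes two
        // one-to-many relationships and repeat visits are just a counter.
        using System.Collections.Generic;

        public class User
        {
            public int UserId { get; set; }
            public virtual ICollection<UserPage> PageVisits { get; set; }
        }

        public class Page
        {
            public int PageId { get; set; }
            public string Url { get; set; }
            public virtual ICollection<UserPage> Visitors { get; set; }
        }

        public class UserPage   // junction with payload
        {
            public int UserId { get; set; }      // composite key, part 1
            public int PageId { get; set; }      // composite key, part 2
            public int VisitCount { get; set; }  // incremented per visit
            public virtual User User { get; set; }
            public virtual Page Page { get; set; }
        }

    The alternative that keeps "counts come from queries" intact is one UserPage row per visit with a VisitedAt timestamp, aggregated with Count() when needed.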

  • Cannot implicitly convert type void to System.Threading.Tasks.Task<bool>

    - by sagesky36
    I have a WCF service that contains the following method. All the methods in the service are asynchronous and compile just fine.

        public async Task<Boolean> ValidateRegistrationAsync(String strUserName)
        {
            try
            {
                using (YeagerTechEntities DbContext = new YeagerTechEntities())
                {
                    DbContext.Configuration.ProxyCreationEnabled = false;
                    DbContext.Database.Connection.Open();

                    var reg = await DbContext.aspnet_Users.FirstOrDefaultAsync(f => f.UserName == strUserName);

                    if (reg != null)
                        return true;
                    else
                        return false;
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

    My client application was set to access the WCF service with the "Allow generation of asynchronous operations" check box ticked, and it generated the proxy just fine. I am receiving the above subject error when trying to call this WCF service method from my client with the following code. Mind you, I know what the error message means, but this is my first time trying to call an asynchronous task in a WCF service from a client.

        Task<Boolean> blnMbrShip = db.ValidateRegistrationAsync(FormsAuthentication.Decrypt(cn.Value).Name);

    What do I need to do to properly call the method so the design-time compile error disappears? Thanks so much in advance...
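
    A hedged note on the likely cause: "convert type void" suggests the generated proxy method returns void, i.e. the service reference was generated with the event-based async pattern (a void XxxAsync method plus a XxxCompleted event) rather than the task-based one. Regenerating the reference with task-based operations usually resolves it; assuming a task-based proxy, the call would then be awaited:

        // Sketch, assuming the proxy exposes Task<bool> ValidateRegistrationAsync
        // and db is the generated client. The enclosing method must itself be
        // marked async for await to compile; the method name is invented.
        public async Task<bool> IsRegisteredAsync(HttpCookie cn)
        {
            bool isMember = await db.ValidateRegistrationAsync(
                FormsAuthentication.Decrypt(cn.Value).Name);
            return isMember;
        }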

  • Images logging as 404s with Zend Framework

    - by azz0r
    Hello, due to Zend rewriting URLs to module/controller/action, it's reporting that images come through as 404s. Here is the error:

        [_requestUri:protected] => /images/Movie/124/thumb/main.png
        [_baseUrl:protected] =>
        [_basePath:protected] =>
        [_pathInfo:protected] => /images/Movie/124/thumb/main.png
        [_params:protected] => Array
            (
                [controller] => images
                [action] => Movie
                [124] => thumb
                [module] => default
            )
        [_rawBody:protected] =>
        [_aliases:protected] => Array ( )
        [_dispatched:protected] => 1
        [_module:protected] => default
        [_moduleKey:protected] => module
        [_controller:protected] => images
        [_controllerKey:protected] => controller
        [_action:protected] => Movie
        [_actionKey:protected] => action

    Here is my .htaccess:

        SetEnv APPLICATION_ENV development
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    I guess what I need to do is put some sort of image or /design detection in place so these are not routed to index.php. Ideas?

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns:

    1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer), I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard.

    2) The 2nd edition is shorter (608 pages vs. 720), so, I guess, it will be slightly easier to get through.

    3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML that it doesn't matter, but I don't know if I should be worrying about this or not.

    4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for.

    Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome, but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application comes from JSON APIs. Right now, when a VC needs some models, it makes the JSON call itself (async), and when it receives the data it builds the models. It works, but I'm trying to think of a cleaner method whereby DAOs retrieve the information for me and return the models, all in an async manner. My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data [DAOinstance getAllUsers], the DAO would do all the network request stuff and then, when it had the data, call a method on its delegate (the VC) to pass the data back. So I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch its logic depending on which DAO instance initiated the request. So my second thought was to pass 'handler' selectors to the DAO object, a la typical JavaScript patterns. So instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"]. Then, when the DAO completed its network activities, it would call the passed selector on the VC and pass the data back. So am I headed in the wrong direction entirely here? Seems like maybe an OK way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then I add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help the situation either. Please note that I used to use the Entity Data Model just fine, but it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. Now I would think that uninstalling and reinstalling Visual Studio 2013 and then installing the NuGet package would solve all problems. What am I missing here? The errors mentioned above are:

        Error 1  The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  14  30  DataLayer
        Error 2  The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  16  52  DataLayer
        Error 3  The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  23  49  DataLayer
        Error 4  The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  28  16  DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish. John

  • Multiple on-screen view controllers in iPhone apps

    - by Felixyz
    I'm creating a lot of custom views and controllers in a lot of my apps, and so far I've mostly set them up programmatically, with adjustments and instantiations being controlled from plists. However, now I'm transitioning to using Interface Builder as much as possible (wish I had done that before - it was always on my backlog). Apple recommends against having many view controllers simultaneously active in iPhone apps, with a couple of well-known exceptions. I've never fully understood why it should be bad to have different parts of the interface belong to different controllers if they have no interdependent functionality at all. Does having multiple controllers risk messing up the responder chain, or is there some other reason it's not recommended, apart from the fact that it's usually not needed? What I want to be able to do is design reusable views and controls in IB, but since a nib is associated with a view controller rather than a view, it seems I'd have to have different parts of the screen be connected to different controllers. I know it's possible to have objects other than view controllers be instantiated from nibs. Should I look into creating my own, more lightweight alternative controllers (which could be sub-controllers of a UIViewController) that could be instantiated from nibs?

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware, for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables; one was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of this new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward. So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - what sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I were lazy, I could just make one huge static bitmap and grab values that way, but I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be bare-bones about how I program this game. It still just boggles my mind what these programmers accomplished with what they had. EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
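
    For the C# re-interpretation, the per-room bookkeeping can be tiny: each room or screen records which graphics bank belongs in each pattern table, and entering the room (or returning from pause) simply copies those banks back in, so nothing has to "remember what was replaced". A hedged sketch with an invented structure:

        // Sketch: rooms record their bank ids; restoring after a pause is
        // simply EnterRoom again, since the room still knows its banks.
        using System;
        using System.Collections.Generic;

        class Room
        {
            public int BgBankId;      // bank for the BG pattern table
            public int SpriteBankId;  // bank for the sprite pattern table
        }

        class Ppu
        {
            public readonly byte[] BgTable = new byte[0x1000];
            public readonly byte[] SpriteTable = new byte[0x1000];
        }

        class BankSwitcher
        {
            private readonly Dictionary<int, byte[]> _banks; // all CHR data by id

            public BankSwitcher(Dictionary<int, byte[]> banks) { _banks = banks; }

            public void EnterRoom(Room room, Ppu ppu)
            {
                Array.Copy(_banks[room.BgBankId], ppu.BgTable, ppu.BgTable.Length);
                Array.Copy(_banks[room.SpriteBankId], ppu.SpriteTable, ppu.SpriteTable.Length);
            }
        }

    On the real hardware the equivalent table was just a few bytes of bank numbers per room in the level metadata, which is why it didn't strain the cartridge's memory.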

  • Segmentation in Linux: Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand:

        Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:

        - Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses.

        - One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation.

        All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called the user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called the kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments.

    I'm unable to understand the first and last paragraphs.

  • Add dynamic charts using the ASP.NET Chart control, C#

    - by dhareni
    I want to add dynamic charts to a web page. It goes like this: I get a start and end date from the user and draw a separate chart for each date between them. I get the data from a SQL database and bind it to the chart like this:

        SqlConnection UsageLogConn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["UsageConn"].ConnectionString);
        UsageLogConn.Open(); // open connection

        string sql = "SELECT v.interval,dateadd(mi,(v.interval-1)*2,'" + startdate + " 00:00:00') as 'intervaltime',COUNT(Datediff(minute,'" + startdate + " 00:00:00',d.DateTime)/2) AS Total FROM usage_internet_intervals v left outer join (select * from Usage_Internet where " + name + " LIKE ('%" + value + "%') and DateTime BETWEEN '" + startdate + " 00:00:00' AND '" + enddate + " 23:59:59') d on v.interval = Datediff(minute,'" + startdate + " 00:00:00',d.DateTime)/2 GROUP BY v.interval,Datediff(minute,'" + startdate + " 00:00:00',d.DateTime)/2 ORDER BY Interval";

        SqlCommand cmd = new SqlCommand(sql, UsageLogConn);
        SqlDataAdapter mySQLadapter = new SqlDataAdapter(cmd);

        Chart1.DataSource = cmd;

        // set series members names for the X and Y values
        Chart1.Series["Series 1"].XValueMember = "intervaltime";
        Chart1.Series["Series 1"].YValueMembers = "Total";

        UsageLogConn.Close();

        // data bind to the selected data source
        Chart1.DataBind();
        cmd.Dispose();

    The above code adds only one chart for one date, and 'Chart1' was added in the design view rather than created dynamically. I want to add more charts dynamically at runtime to the web page. Can anyone help me with this? I am using VS 2008, ASP.NET 3.5, and the charting library is:

        using System.Web.UI.DataVisualization.Charting;
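
    A hedged sketch of the dynamic part: build one Chart per day in a loop and add it to a container control declared in the markup (a Panel or PlaceHolder). The chart area, series and helper names below are invented, and dynamically created controls must be rebuilt on every postback (e.g. in Page_Init):

        using System;
        using System.Web.UI.DataVisualization.Charting;

        private void BuildCharts(DateTime start, DateTime end)
        {
            for (DateTime day = start; day <= end; day = day.AddDays(1))
            {
                Chart chart = new Chart();
                chart.ID = "chart_" + day.ToString("yyyyMMdd");
                chart.Width = 600;
                chart.Height = 300;
                chart.ChartAreas.Add(new ChartArea("main"));

                Series series = new Series("Series 1");
                series.XValueMember = "intervaltime";
                series.YValueMembers = "Total";
                chart.Series.Add(series);

                chart.DataSource = GetIntervalData(day); // hypothetical helper
                                                         // running the query
                chart.DataBind();

                chartPanel.Controls.Add(chart); // container in the .aspx
            }
        }

    As an aside, the date and filter strings concatenated into the query would be safer as SqlParameters.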

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and "normalize" them. That means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable; if not, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original denormalized table, but fully normalized and consisting mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
        ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything, but rather just return the id value of the row. If you miss this step, though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you.

  • Help for Platform plus Plugin development with J2EE

    - by echo
    Hi, everybody. I want to develop a monitoring platform which can monitor many different subjects, such as databases, OSes, middleware etc. The system includes two parts:

    1. Report center (shows some useful charts and reports)
    2. Collector (collects information from the monitored subjects)

    For the time being, I just want to monitor some parameters of Oracle and Linux, and I will add more monitored subjects like MS SQL Server, IBM DB2 etc. later. I need a good architecture so I can easily add a new monitoring module without disturbing the original framework too much. I want to develop it with a "platform + plugin" approach: a plugin should have its own UI, settings part and something else. I plan to use Struts2 + Spring + Hibernate to develop the report center. I am not very sure how to design this architecture, for I don't have any "platform + plugin" experience. I have googled some information, such as OSGi and Portlets, but I am not sure which one should be my best choice. Can anyone give me some direction on this? Your help will be appreciated.

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective; there is not a right answer, and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas, and I hope this is acceptable in this medium. I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup, with help from some Zend libraries for database-layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly). So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for technological recommendations for data storage methods that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at present I know little to nothing about KV stores). What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this - although I am most familiar and comfortable using PHP, I want to leave all technical options open.

  • How does static code run with multiple threads?

    - by Krisc
    I was reading http://stackoverflow.com/questions/1511798/threading-from-within-a-class-with-static-and-non-static-methods and I am in a similar situation. I have a static method that pulls data from a resource and creates some runtime objects based on the data.

        static class Worker
        {
            public static MyObject DoWork(string filename)
            {
                MyObject mo = new MyObject();
                // ... does some work
                return mo;
            }
        }

    The method takes a while (in this case it is reading 5-10MB files) and returns an object. I want to take this method and use it in a multi-threaded situation so I can read multiple files at once. Design issues/guidelines aside, how would multiple threads access this code? Let's say I have something like this...

        class ThreadedWorker
        {
            public void Run()
            {
                Thread t = new Thread(OnRun);
                t.Start();
            }

            void OnRun()
            {
                MyObject mo = Worker.DoWork("somefilename");
                mo.WriteToConsole();
            }
        }

    Does the static method run for each thread, allowing for parallel execution?
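
    For what it's worth, the behaviour can be demonstrated: a static method's parameters and locals are allocated per call, on the calling thread's stack, so concurrent calls proceed in parallel without interfering; only static fields are shared and would need synchronization. A small self-contained sketch:

        // Each of the five threads gets its own copy of 'local' and prints
        // 1000; only static *fields* (none here) would be shared state.
        using System;
        using System.Threading;

        static class Worker
        {
            public static void DoWork(string name)
            {
                int local = 0; // one copy per call, per thread
                for (int i = 0; i < 1000; i++)
                    local++;
                Console.WriteLine(name + ": " + local);
            }
        }

        class Program
        {
            static void Main()
            {
                Thread[] threads = new Thread[5];
                for (int i = 0; i < threads.Length; i++)
                {
                    string name = "t" + i;
                    threads[i] = new Thread(() => Worker.DoWork(name));
                    threads[i].Start(); // all five run DoWork concurrently
                }
                foreach (Thread t in threads)
                    t.Join();
            }
        }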

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money but I am more concerned about implementing the right solution. To summarize, there are my requirements: Oracle backend Rapid development (L)GPL-free Free I'm reasonably happy with DataSets but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend to convert DataSets to POCOs? Thanks in advance.

  • Bespoke Development or Leverage SharePoint With Web Parts etc?

    - by Asim
    Hi all, we are currently in the process of drawing up a solution for an existing client, creating a number of eServices. The client currently has MOSS 2007, and the proposed solution is to use MOSS as the launching pad for the eServices. The requirement involves drawing up several online forms which provide registration facilities as well as facilitating a workflow of some sort. I have been told that the proposed solution requires complex web forms; most are complex forms with parent-child details that have multiple windows. The proposal is to do some bespoke development, building ASP.NET forms. These forms would be deployed under the _layouts folder of the current MOSS portal, inheriting the master page design on the current site. I have been told that this approach makes development and deployment simpler, as well as having 'complete integration' with MOSS. My questions are:

    1. Is this the best way to leverage SharePoint? It seems like the proposed solution is not leveraging MOSS at all..! I thought perhaps utilizing Web Parts would be better, but I have been told that this is more complex and that developing smarter, more intuitive UIs is more difficult. Is this really the case? If not, what should be the recommended approach?

    2. We will be utilizing Ultimus as the workflow engine; however, I have been recommended K2 workflows. Has anyone used both/have any opinions on either?

    Many thanks in advance! Kind regards,
