Search Results

Search found 97855 results on 3915 pages for 'code performance'.

Page 262/3915 | < Previous Page | 258 259 260 261 262 263 264 265 266 267 268 269  | Next Page >

  • Hiding a <div> from the VB.NET code-behind

    - by reffe
    I have this code for hiding a table and a cell in an .aspx page, with a VB.NET code-behind:

        For Each row As HtmlTableRow In tab_a1.Rows
            If row.ID = "a1" Then
                For Each cell As HtmlTableCell In row.Cells
                    cell.Visible = (cell.ID = "a1")
                Next
            ElseIf row.ID = "b1" Then
                For Each cell As HtmlTableCell In row.Cells
                    cell.Visible = (cell.ID = "b1")
                Next
            Else
                row.Visible = False
            End If
        Next

    Now, instead of tables, I'm using <div> tags. How can I use similar code to make the divs visible and invisible?
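
    A minimal sketch of one way to handle this from the code-behind, assuming the <div> elements are declared with runat="server" so they surface as HtmlGenericControl instances. It is written in C# (the question is VB.NET, but the API is the same), and the container and ID names are placeholders:

        using System.Web.UI;
        using System.Web.UI.HtmlControls;

        public partial class MyPage : Page
        {
            // Show only the <div runat="server"> whose ID matches idToShow and hide its siblings.
            // "container" is whatever parent control holds the divs (placeholder name).
            protected void ShowOnlyDiv(Control container, string idToShow)
            {
                foreach (Control child in container.Controls)
                {
                    HtmlGenericControl div = child as HtmlGenericControl;
                    if (div != null)
                    {
                        // Visible = false means the div is not rendered to the client at all.
                        div.Visible = (div.ID == idToShow);
                    }
                }
            }
        }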

  • Source code versioning with comments (organizational practice) - leave or remove?

    - by ADTC
    Before you start admonishing me with "DON'T DO IT," "BAD PRACTICE!" and "Learn to use proper source code control", please hear me out first. I am fully aware that the practice of commenting out old code and leaving it there forever is very bad, and I hate such practice myself. But here's the situation I'm in. A few months ago I joined a company as a software developer. I had worked at the company for a few months as an intern, about a year before joining recently. Our company uses source code version control (CVS), but not properly. Here's what happened both in my internship and in my current permanent position. Each time, I was assigned to work on a legacy project (about 8-10 years old). Instead of creating a CVS account and letting me check out code and check in changes, a senior colleague exported the code from CVS, zipped it up and passed it to me. While this colleague checks in all changes in bulk every few weeks, our usual practice is to do fine-grained versioning in the actual source code itself (each file increments in versions independently of the rest). Whenever a change is made to a file, old code is commented out, new code entered below it, and this whole section is marked with a version number. A note about the changes is placed at the top of the file in a section called Modification History. Finally, the changed files are placed in a shared folder, ready and waiting for the bulk check-in.

        /*
         * Copyright notice blah blah
         * Some details about file (project name, file name etc)
         * Modification History:
         * Date        Version  Modified By  Description
         * 2012-10-15  1.0      Joey         Initial creation
         * 2012-10-22  1.1      Chandler     Replaced old code with new code
         */
        code ....
        //v1.1 start
        //old code
        new code
        //v1.1 end
        code ....

    Now the problem is this. In the project I'm working on, I needed to copy some new source code files from another project (new in the sense that they didn't exist in the destination project before). These files have a lot of historical commented-out code and comment-based versioning, including usually long or very long Modification History sections. Since the files are new to this project, I decided to clean them up, remove unnecessary code including historical code, and start fresh at version 1.0. (I still have to continue the practice of comment-based versioning despite hating it. And don't ask why not start at version 0.1...) I did something similar during my internship and no one said anything. My supervisor has seen the work a few times and didn't say I shouldn't do such clean-up (if it was noticed at all). But a same-level colleague saw this and said it's not recommended, as it may cause downtime in the future and increase maintenance costs. An example is when changes are made in another project on the original files and these changes need to be propagated to this project. With the code files drastically different, it could cause confusion for an employee doing the propagation. It makes sense to me, and is a valid point. I couldn't find any reason to do my clean-up other than the inconvenience of ridiculously messy code. So, long story short: Given the practice in our company, should I not do such clean-up when copying new files from project to project? Is it better to make changes on the (copy of the) original code with full history in comments? Or what justification can I give for doing the clean-up? PS to mods: I hope you allow this question some time even if for any reason you determine it to be unfit for SO. I apologize in advance if anything is inappropriate, including the tags.

  • Code Design Process?

    - by user156814
    I am going to be working on a project, a web application. I was reading 37signals' Getting Real book online (http://gettingreal.37signals.com/), and I understand the recommended process for building the entire website: brainstorm, sketch, HTML, code. They touch on each step lightly, but they never really say much about the coding process itself (all they say is to keep code lean). I've been reading about different ways to go about it (top-down, bottom-up), but I don't know much about either approach. I even read somewhere that one should write tests for the code before actually writing the code??? WHAT? What coding process should one follow when building an application? If it's relevant, I'm using PHP and a framework.
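
    A minimal sketch of the "tests before code" idea mentioned above, shown in C# with NUnit (the slug-generator class and its behaviour are invented purely for illustration; PHPUnit plays the same role in PHP):

        using NUnit.Framework;

        // The test is written first and pins down the behaviour we want from a
        // (hypothetical) slug generator; the implementation below is then the
        // simplest thing that makes the test pass.
        [TestFixture]
        public class SlugGeneratorTests
        {
            [Test]
            public void Converts_Title_To_Lowercase_Hyphenated_Slug()
            {
                string slug = SlugGenerator.From("Hello World");
                Assert.AreEqual("hello-world", slug);
            }
        }

        public static class SlugGenerator
        {
            public static string From(string title)
            {
                return title.Trim().ToLowerInvariant().Replace(' ', '-');
            }
        }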

  • Redundant code in exception handling

    - by Nicola Leoni
    Hi, I have a recurring problem: I can't find an elegant way to avoid duplicating resource-cleanup code:

        resource allocation;
        try {
            f();
        } catch (...) {
            resource cleaning code;
            throw;
        }
        resource cleaning code;
        return rc;

    I know I can write a temporary class whose destructor does the cleanup, but I don't really like that: it breaks the code flow, and I need to give the class references to all the stack variables it has to clean up. A helper function has the same problem. I can't figure out how there can be no elegant solution to such a recurring problem.

  • How to run JavaScript from .NET code?

    - by dotnetcoder
    I have a web request that returns an HTML response containing a form with hidden fields, plus some JavaScript that submits the form automatically on page load (if it were run in a browser). If I save this file as *.html and open it in a browser, the JavaScript automatically posts the form and the output is an Excel file. I want to be able to generate this (Excel) file from C# code that is not running in a browser. I tried mocking the form post, but it's complicated and has various scenarios depending on the original web request's query string. Any pointers? I know it's probably not possible to run the JS code that posts the form from within C# code, but I still thought I'd check whether someone has done that.
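
    One possible approach, sketched under the assumption that a desktop (WinForms-capable) context is acceptable: let a WebBrowser control load the saved response so its JavaScript actually runs and posts the form, rather than reproducing the POST by hand. The file path and the exit condition are illustrative only; a server-side alternative would still be to replicate the POST with HttpWebRequest.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        class AutoPostRunner
        {
            static void Main()
            {
                var worker = new Thread(() =>
                {
                    int completions = 0;
                    var browser = new WebBrowser { ScriptErrorsSuppressed = true };
                    browser.DocumentCompleted += (s, e) =>
                    {
                        completions++;
                        Console.WriteLine("DocumentCompleted #" + completions + ": " + e.Url);
                        // 1st completion: the saved page has loaded and its onload script posts the form.
                        // 2nd completion: the posted result is back (if it is HTML; a file download
                        // would need extra handling, this is only a sketch).
                        if (completions >= 2)
                            Application.ExitThread();
                    };
                    browser.Navigate(@"C:\temp\response.html");   // hypothetical saved response
                    Application.Run();                            // message loop so the events fire
                });
                worker.SetApartmentState(ApartmentState.STA);     // WebBrowser requires an STA thread
                worker.Start();
                worker.Join();
            }
        }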

  • What setting needs to be made to make .NET Automation responsive?

    - by Greg
    I have an app that watches for application windows being created on the desktop, using:

        class Unresponsive
        {
            private StructureChangedEventHandler m_UIAeventHandler;

            public Unresponsive()
            {
                m_UIAeventHandler = new StructureChangedEventHandler(OnStructureChanged);
                Automation.AddStructureChangedEventHandler(AutomationElement.RootElement, TreeScope.Children, m_UIAeventHandler);
            }

            private void OnStructureChanged(object sender, StructureChangedEventArgs e)
            {
                Debug.WriteLine("Change event");
            }
        }

    You can see the same issue using UISpy.exe, selecting the desktop and configuring the scope for children and just the structure-changed event. The problem I'm trying to resolve is that the events are not raised in a timely manner; there seems to be some grouping/delay which makes the app appear to be non-responsive. If you start a new app with one window and wait a second, you get the event, which seems all right. If you start the same app several times without a delay (say, by clicking a Quick Launch icon), it's not until all of the instances of the app get 'initialised' by the AutomationProxies that you get the notice for the first app (and, in short order, the other apps/windows). I've sat watching Task Manager as each instance of the app starts to grow as it is initialised, waiting until the last app is done, and then seeing the events all come in. Similarly, any time any apps are starting windows within a timeframe there seems to be some blocking. I can't see how to configure this timeframe, or how to get each structure-changed event sent on as soon as it happens. Also, this process of listening for structure-changed events seems to be leaking: just by listening there is a leak in native memory (visible in UISpy and my app).
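
    There is no documented setting for the batching described above, but one small change that sometimes helps keep the client side responsive (and makes the leak easier to manage) is to register the handler off the UI thread and remove it explicitly when done. A rough sketch, with no claim that it changes how quickly the proxies deliver events:

        using System;
        using System.Threading;
        using System.Windows.Automation;

        class DesktopWatcher : IDisposable
        {
            private readonly StructureChangedEventHandler _handler;

            public DesktopWatcher()
            {
                _handler = OnStructureChanged;
                // Register from a worker thread so the app's own UI thread is never tied up.
                ThreadPool.QueueUserWorkItem(_ =>
                    Automation.AddStructureChangedEventHandler(
                        AutomationElement.RootElement, TreeScope.Children, _handler));
            }

            private void OnStructureChanged(object sender, StructureChangedEventArgs e)
            {
                Console.WriteLine("Structure change: " + e.StructureChangeType);
            }

            public void Dispose()
            {
                // Explicit removal; leaving handlers registered is one common source of
                // memory growth while listening.
                Automation.RemoveStructureChangedEventHandler(AutomationElement.RootElement, _handler);
            }
        }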

  • Embedding JS code from a Rails application

    - by Arpit Vaishnav
    I'm working on a Ruby on Rails application and trying to embed JS code. I added the whole embed snippet to a text box, but when I copy-paste it into other blogs where embedding is possible, the JS doesn't fully work. The code is given below (I have removed the leading < characters so it shows up in the coding window):

        <%= text_field_tag "text", " script src=\"/public/javascripts/calendarview.js\" script src=\"/public/javascripts/calendarview_init.js\" link rel=\"stylesheet\" href=\"/public/stylesheets/calendarview.css\" link rel=\"stylesheet\" href=\"/public/stylesheets/calendarview_init.css\" ", :size => 40 %>

    Please help if possible.

  • Ideas Related to Subset Sum with 2,3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm quite sure there have been more than enough posts explaining it. However, in terms of understanding it fully, I wanted to share my thoughts and get more efficient solutions from all the great people in here related to the Subset Sum problem. I've searched the Internet and there are actually a lot of sources, but I'm really willing to re-implement an algorithm, or find my own, in order to understand it fully. The key thing I'm struggling with is efficiency, considering the set size will be large (I do not have a limit, just conceptually large). The two phases I'm trying to implement ideas for are: finding two numbers that sum to a given integer T, then three numbers, and eventually K numbers. Some ideas I've thought of: for the two-integer part, I'm thinking of basically sorting the array, O(n log n), and for each element in the array searching for its negative value (i.e. if the array element is 3, searching for -3). Maybe including a hash table could be better, providing O(1) lookup of the element? For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author states that it is not applicable for large numbers. So, for 2, 3 and more integers, what ideas could be applied to the subset problem? I'm struggling with setting up a dynamic programming method that will be efficient for large inputs as well.
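
    For the two-number phase, a hash set gives the O(1) lookup mentioned above without sorting. A minimal sketch in C# (target is the given integer T; the negative-value search described above is the T = 0 case):

        using System.Collections.Generic;

        static class PairSum
        {
            // Returns true if any two elements of 'values' sum to 'target'.
            // O(n) expected time, O(n) space. For the three-number case, fix one
            // element and run this on the remaining elements (O(n^2) overall).
            public static bool HasPairWithSum(int[] values, int target)
            {
                var seen = new HashSet<int>();
                foreach (int v in values)
                {
                    if (seen.Contains(target - v))
                        return true;          // (target - v) was seen earlier, so the pair exists
                    seen.Add(v);
                }
                return false;
            }
        }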

  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which has some data with an ID and a timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but it currently takes over 2 minutes on a high-end machine. I'd really like to speed it up. I have found this tip which recommends using a spatial index, but the example it gives is for IP addresses. However, the speed increase (436s to 3s) is impressive. How can I use this with timestamps?

  • Different levels in CodeIgniter?

    - by ajsie
    I'm completely new to frameworks. The structure of CodeIgniter looks like: system, system/application. The system folder is CodeIgniter's base folder, right? So if they release a new version in the future, I just put application in the new system folder and it's upgraded, right? Does this mean I shouldn't put new files and so on in the system folder? Because some code could be used for other applications, and I want to put that under my current base system, not inside the application folder of the application I'm developing. I want my application's classes to extend my base system's classes, which in turn extend CodeIgniter's base system classes. So there are 3 levels. How can I accomplish this? Where do I put the system level between CI and my application?

  • PHP fastest method of reading server response

    - by Peter John
    Hi there, I'm having some real problems with the lag produced by using fgets to grab the server's response to some batch database calls I'm making. I'm sending through a batch of, say, 10,000 calls, and I've tracked the lag down to fgets causing the hold-up in the speed of my application, as the response for each call needs to be grabbed. I have found this thread, http://bugs.php.net/bug.php?id=32806, which explains the problem quite well, but he's reading a file, not a server response, so fread could be a bit tricky as I could get part of the next line and extra stuff which I don't want. Any help much appreciated!

  • PHP/MySQL database user connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application, dbuser1, with limited access: only querying, inserting and updating tables; no ALTER TABLE. Now the question is: should I use the same dbuser1 everywhere in my scripts, so that if there are 100 people currently using my system, and hence 100 scripts running in parallel, they all connect to the database as the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the db users?

  • Avoid having a huge collection of ids by calling a DAO.getAll()

    - by Michael Bavin
    When calling PersonDao.getAll(), instead of returning a List<Long> of ids, we wanted to avoid holding an entire collection of ids in memory. It seems like returning an org.springframework.jdbc.support.rowset.SqlRowSet and iterating over the rowset would not hold every object in memory. The only problem here is that I cannot cast a row to my entity. Is there a better way to do this?

  • How do I write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I am supposed to write tasks and not thread code, and I like how it works under the hood, with my limited understanding (tasks are in a pool, there won't be 100 threads on 4 cores, a task is not guaranteed to run because it isn't on its own thread and may be far back in the pool; but it may be run with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here: http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link; warning: it isn't 'great'). My favourite part was 'solving the maze is better done in parallel', which is around the 48-minute mark (you can click the link on the left side; that part is really all you need to watch, if any). However, I would like to see more code examples and some API details on how to write tasks. Does anyone have a good resource? I have no idea what a class or piece of code might look like after being pushed onto a pool, how weird code might look when you need to make a copy of everything, or how much of everything gets pushed onto a pool.
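
    TBB itself is C++, so purely as an illustrative analogue, here is the same "write tasks, not threads" shape using .NET's Task Parallel Library: a recursive computation split into tasks that the runtime schedules onto a fixed pool. The cut-off value is arbitrary and the example is a sketch, not TBB code:

        using System.Threading.Tasks;

        static class TaskSum
        {
            // Sum data[from..to) by splitting the range into tasks; small ranges are
            // done inline so the runtime is not flooded with tiny tasks.
            public static long Sum(int[] data, int from, int to)
            {
                const int CutOff = 10000;                   // arbitrary threshold for the sketch
                if (to - from <= CutOff)
                {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }
                int mid = from + (to - from) / 2;
                Task<long> left = Task.Run(() => Sum(data, from, mid));  // spawn one half as a task
                long right = Sum(data, mid, to);                         // recurse into the other half here
                return left.Result + right;
            }
        }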

  • How to decide on what hardware to deploy web application

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide what hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you translate parameters such as concurrent users, simultaneous connections and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...

  • Compiling code at runtime and loading it into the current AppDomain

    - by Richard Friend
    Hi, I'm compiling some code at runtime and then loading the assembly into the current AppDomain; however, when I then try to use Type.GetType it can't find the type... Here is how I compile the code:

        public static Assembly CompileCode(string code)
        {
            Microsoft.CSharp.CSharpCodeProvider provider = new CSharpCodeProvider();
            ICodeCompiler compiler = provider.CreateCompiler();
            CompilerParameters compilerparams = new CompilerParameters();
            compilerparams.GenerateExecutable = false;
            compilerparams.GenerateInMemory = false;
            foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
            {
                try
                {
                    string location = assembly.Location;
                    if (!String.IsNullOrEmpty(location))
                    {
                        compilerparams.ReferencedAssemblies.Add(location);
                    }
                }
                catch (NotSupportedException)
                {
                    // this happens for dynamic assemblies, so just ignore it.
                }
            }
            CompilerResults results = compiler.CompileAssemblyFromSource(compilerparams, code);
            if (results.Errors.HasErrors)
            {
                StringBuilder errors = new StringBuilder("Compiler Errors :\r\n");
                foreach (CompilerError error in results.Errors)
                {
                    errors.AppendFormat("Line {0},{1}\t: {2}\n", error.Line, error.Column, error.ErrorText);
                }
                throw new Exception(errors.ToString());
            }
            else
            {
                AppDomain.CurrentDomain.Load(results.CompiledAssembly.GetName());
                return results.CompiledAssembly;
            }
        }

    This bit fails: getting the type from the compiled assembly works just fine, but Type.GetType does not seem to be able to find it.

        Assembly assem = RuntimeCodeCompiler.CompileCode(code);
        string typeName = String.Format("Peverel.AppFramework.Web.GenCode.ObjectDataSourceProxy_{0}", safeTypeName);
        Type t = assem.GetType(typeName);    // This works just fine..
        Type doesntWork = Type.GetType(t.AssemblyQualifiedName);
        Type doesntWork2 = Type.GetType(t.Name);
        ....
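
    A likely reason Type.GetType fails here is that the dynamically compiled assembly is not in the default probing path, so binding by name cannot locate it. One common workaround, sketched below and to be adapted to the code above, is to keep hold of the compiled assembly and answer AppDomain.AssemblyResolve yourself:

        using System;
        using System.Reflection;

        static class DynamicTypeLookup
        {
            private static Assembly _compiled;   // set to the result of RuntimeCodeCompiler.CompileCode(...)

            public static void Register(Assembly compiled)
            {
                _compiled = compiled;
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    // Hand back the dynamically compiled assembly when it is requested by name.
                    return args.Name == _compiled.FullName ? _compiled : null;
                };
            }

            public static Type Find(string assemblyQualifiedName)
            {
                // With the resolver registered, Type.GetType can bind to the dynamic assembly.
                // (Using assem.GetType(typeName) directly, as above, remains the simplest route.)
                return Type.GetType(assemblyQualifiedName);
            }
        }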

  • How to interpret mono profiler results?

    - by Ovidiu Pacurar
    I created a console application in C#; running it on Windows/.NET is 5x faster than on Linux/Mono or Windows/Mono. The app encodes some binary files into a text format (JSON). I profiled the app on Linux/Mono using:

        mono --profile=default:stat myconsoleapp.exe

    Here is the first part of the result:

        prof counts: total/unmanaged: 32274/25062
        23542 72.95 % mono
          459  1.42 % System.Decimal:Divide (System.Decimal,System.Decimal)
          457  1.42 % System.Decimal:Round (System.Decimal,int,System.MidpointRounding)
          411  1.27 % /lib/libz.so.1
          262  0.81 % /lib/tls/i686/cmov/libc.so.6(memmove
          253  0.78 % System.Decimal:IsZero ()
          247  0.77 % System.NumberFormatter:Init (string,double,int)
          213  0.66 % System.NumberFormatter:AppendDigits (int,int)

    72.95% "mono"? Are Mono internals using three quarters of the total execution time?

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000s of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

        push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
        tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
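
    Before choosing storage, it can help to pin the access pattern down as code. The following is only an in-memory sketch of the push/tail interface described above (names and types are invented for illustration), not a proposal for the persistent design:

        using System.Collections.Generic;
        using System.Linq;

        class EventStreams<TEvent>
        {
            private readonly Dictionary<string, List<KeyValuePair<long, TEvent>>> _streams =
                new Dictionary<string, List<KeyValuePair<long, TEvent>>>();

            public void Push(string key, long timestamp, TEvent ev)
            {
                List<KeyValuePair<long, TEvent>> stream;
                if (!_streams.TryGetValue(key, out stream))
                    _streams[key] = stream = new List<KeyValuePair<long, TEvent>>();
                stream.Add(new KeyValuePair<long, TEvent>(timestamp, ev));
                // Pushes are "almost sorted", so only re-sort when an out-of-order timestamp arrives.
                if (stream.Count > 1 && stream[stream.Count - 2].Key > timestamp)
                    stream.Sort((a, b) => a.Key.CompareTo(b.Key));
            }

            public IEnumerable<TEvent> Tail(string key, long sinceTimestamp)
            {
                List<KeyValuePair<long, TEvent>> stream;
                if (!_streams.TryGetValue(key, out stream))
                    return Enumerable.Empty<TEvent>();
                return stream.Where(p => p.Key >= sinceTimestamp).Select(p => p.Value);
            }
        }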

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1940 icons that are used throughout. They're currently in ICO, and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4KB in filesize (as reported by Finder, but ls reports that they vary from ~1000 bytes to 5000 bytes). A very small number of these icons only contain the 32x32 representation, and as a result are only around 700 bytes in size. Currently I am bundling these icons with my application, and they are inflating the size of the app a bit more than I would like. Altogether, the images total just about 25.5MB. Xcode must do some kind of compression, because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps, though; if not, the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week) and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app? Things I've tried and not had much success with: Converting the icons from ICO to PNG. I tried this in the hope that the pngcrush utility would help out with the filesize, but it appears that it doesn't make much of a difference between a normal PNG and a crushed PNG (I believe it just optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon files... Zipping the images, and then uncompressing them on first run. While this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25MB of images takes too long and creates a bad experience... Things I've considered but not yet tried: Instead of distributing the icons within the app bundle, host them online and download each icon on demand (it depends on the user's data which icons will actually be displayed, and when). The issues with this are that bandwidth costs money and image downloads will be bandwidth-intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1500 to be active, based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package. So I'm open to thoughts on how to solve this tricky issue.

  • ASP.NET page runs slowly in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a GridView inside an UpdatePanel. It displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large DB reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled why it would not perform exactly the same as it does from Visual Studio. (It is in release mode and I have taken off the debug flag.) I have since been trying things like eliminating unneeded update panels and throwing out the Ajax tool. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by UpdatePanels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.
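
    One way to narrow this down is to time each stage on the server itself and write the numbers to the ASP.NET trace (viewable at trace.axd when tracing is enabled), so the production box can be compared with the development box. A purely diagnostic sketch; the page, control and method names below are placeholders:

        using System.Data;
        using System.Diagnostics;
        using System.Web.UI.WebControls;

        public partial class ReportPage : System.Web.UI.Page
        {
            protected GridView gridView;                       // normally declared in the designer file

            protected void LoadGrid()
            {
                var sw = Stopwatch.StartNew();
                DataTable data = ReadFromDatabase();           // placeholder for the large DB read
                Trace.Write("timing", "DB read: " + sw.ElapsedMilliseconds + " ms");

                sw = Stopwatch.StartNew();
                gridView.DataSource = data;
                gridView.DataBind();
                Trace.Write("timing", "DataBind: " + sw.ElapsedMilliseconds + " ms");
            }

            private DataTable ReadFromDatabase()
            {
                return new DataTable();                        // stand-in so the sketch compiles
            }
        }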

  • SQL Server query using a function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL, ...

    In the Data column there's this xml:

        <data>
          <RRN>...</RRN>
          <DateOfBirth>...</DateOfBirth>
          <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. Now, a DB specialist told me that adding a function and a view and changing the query would make it much faster to search for a user with a given RRN. But instead, these are the results when I execute with the changes from the DB specialist: 2584, 2342, 2322, 2383, ... This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100) WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view slower?

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table: 1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created. 2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Copying only entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).

  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace references in the Model to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people re: their experiences with Zend_Db API vs raw SQL, and use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume writing a custom SQL API will be more performant than the assorted classes the Zend DB API requires.
