Search Results

Search found 14282 results on 572 pages for 'performance counter'.


  • Prevent 'Run-time error '7' out of memory' error in Excel when using macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my Excel file. Is there any way I can prevent it? My code is below; debugging highlights the following line as the issue:

        ActiveSheet.Shapes.SelectAll

    My macro:

        Private Sub Save()
            Dim sh As Worksheet
            ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet.
            Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook.
            sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date.
            With sh.UsedRange.Cells
                .Value = .Value 'Eliminate all formulas.
                .Validation.Delete 'Remove all validation.
                .FormatConditions.Delete 'Remove all conditional formatting.
                ActiveSheet.Buttons.Delete
                ActiveSheet.Shapes.SelectAll
                Selection.Delete
                lrow = Range("I" & Rows.Count).End(xlUp).Row 'Find the last row containing data in column I.
                Rows(lrow + 1 & ":" & Rows.Count).Delete 'Delete rows with no data in column I.
                Application.ScreenUpdating = False
                .Range("A410:XFD1048576").Delete Shift:=xlUp 'Delete all cells outwith report range.
                Application.ScreenUpdating = True
                Dim counter
                Dim nameCount
                nameCount = ActiveWorkbook.Names.Count
                counter = nameCount
                Do While counter > 0
                    ActiveWorkbook.Names(counter).Delete
                    counter = counter - 1
                Loop 'Remove named ranges from workbook.
            End With
            ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet.
            ActiveWorkbook.Close False 'Close the new workbook.
            MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Inform user that report has been saved.
        End Sub

    Not sure how to make this more efficient or to prevent this error.
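
    A possible workaround, offered as a hedged sketch rather than a verified fix: delete the shapes by index instead of selecting them first. This avoids the SelectAll/Selection round-trip, which is a commonly reported trigger for run-time error 7 on large sheets.

        'A minimal sketch (untested against this workbook): replace the
        'SelectAll/Selection.Delete pair with a backwards index loop, which
        'stays valid while the collection shrinks.
        Dim i As Long
        For i = ActiveSheet.Shapes.Count To 1 Step -1
            ActiveSheet.Shapes(i).Delete
        Next i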


  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set. Common memory-allocating queries are those that perform Sort or Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples; it is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with different parameter values, the same amount of memory is used to execute it. This may lead to underestimation / overestimation of memory on plan reuse. Overestimation might not be a noticeable issue for Sort operations, but underestimation will lead to a spill over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort; Plan Caching and Query Memory Part II covers the same for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement of a stored procedure does not change significantly across predicates.

    The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory; let's see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 48 ms to complete. It was granted 6656 KB based on an estimate of 43199.9 rows.
    The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimate is fine. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 679 ms to complete. It was again granted 6656 KB based on an estimate of 43199.9 rows. The estimated number of rows, 43199.9, is far from the actual number of rows, 259200, because the estimate is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads to the sort spilling over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile (refer to our webcasts for further details; a plan-handle lookup sketch appears after this article).

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    Now the stored procedure took only 294 ms instead of 679 ms. It was granted 26832 KB of memory, and the estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this execution is due to the better memory estimate, which avoids the sort spilling over tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 49 ms to complete, similar to our very first execution. This time it was granted more memory (26832 KB) than necessary (6656 KB), because the estimate of 259200 rows reflects 6 months of data instead of 1 month (43199.9 rows); again, the estimate comes from the first set of parameter values supplied after the recompile, which covered 6 months in this case. This overestimation did not affect performance here, but it might affect other concurrent queries requiring memory, and hence overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then execute it first with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

    The stored procedure took 1 ms. It was granted 1024 KB based on an estimate of 1440 rows. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 955 ms to complete, far higher than the 679 ms or 294 ms we saw before. It was granted 1024 KB based on an estimate of 1440 rows, but we saw earlier that this stored procedure needs 26832 KB of memory to execute the 6 month date range optimally without spilling over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler, and unlike before it was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is far too low.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution has a good estimate as before, and the 6 month execution also has a good estimate and memory grant, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive here compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution overestimates rows and memory, because we hinted the optimizer to optimize for 6 months of data. The 6 month execution has a good estimate and memory grant for the same reason.
    Let's execute the stored procedure with a 12 month date range, using the currently cached plan that was compiled for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range, while the actual number of rows is 524160 for the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range but not the 12 month date range, so the sort spills over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance the way the recompile hint can.

    Summary: a cached plan may lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, which introduces compilation overhead; however, in most cases it is better to pay for compilation than to spill the sort over tempdb, which can be far more expensive. The other possibility is the optimize for hint, but if one sorts more data than the hint anticipates, the sort will still spill; on the other side, overestimation can cause unnecessary memory pressure for other concurrently executing queries, and for Hash Match operations it can itself lead to poor performance. When the values used in the optimize for hint are later archived from the database, the estimate becomes wrong and performance suffers badly, so exercise caution before using the optimize for hint; the recompile hint is better in that case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011 (see Microsoft UK TechNet). These are hands-on workshops with a maximum of 12 participants, not lectures.

    Disclaimer and copyright information: this article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
    This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
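
    The article recommends DBCC FREEPROCCACHE (plan_handle) over sp_recompile in production without showing how to obtain the handle. A minimal lookup sketch, not taken from the article, assuming the procedure lives in dbo in the current database (SQL Server 2008 and later accept a plan_handle argument):

        -- A hedged sketch: find the cached plan for one stored procedure and
        -- free only that plan. If several plans match, this picks one of them.
        declare @plan_handle varbinary(64)
        select @plan_handle = cp.plan_handle
        from sys.dm_exec_cached_plans cp
        cross apply sys.dm_exec_sql_text(cp.plan_handle) st
        where st.objectid = object_id('dbo.CustomersByCreationDate')
              and st.dbid = db_id()
        dbcc freeproccache (@plan_handle)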


  • Does mixing Quartz and OpenGL ES cause a big performance degradation?

    - by Eonil
    I plan to make a game using OpenGL for the 3D world view and CALayer (or UIView) for the HUD UI. It's easy to imagine a performance degradation from mixing them, but the document that mentioned this impact has disappeared: http://developer.apple.com/iphone/library/technotes/tn2008/tn2230.html I cannot find it in the current version of the SDK reference. I did find this: http://gamesfromwithin.com/gdc-2010-the-best-of-both-worlds-using-uikit-with-opengl If you have experience with this, please let me know about the performance impact on the current SDK.


  • What are the performance implications of wildcard mapping all requests through IIS 6.0?

    - by slolife
    I am interested in using UrlRewriter.NET and noticed that the configuration page for IIS 6.0 on Win2k3 says to map all requests through the ASP.NET ISAPI. That's fine, but I am wondering if anyone has good or bad things to say about this, performance-wise. Is my web server going to be dragged to its knees by doing this, or will it be more of a small step up in server load? My server currently has room to breathe, so some performance hit is expected and acceptable.


  • Performance-related features for migration from .NET 2003 / Framework 1.1 to .NET 2008 / Framework 3.5?

    - by KuldipMCA
    I have worked with VB.NET 2003 on Framework 1.1 for the last 3.5 years, building Windows applications. We are currently migrating to VB.NET 2008 and Framework 3.5, but I don't know which of the new features, particularly around ADO.NET, matter for performance. I know about LINQ to SQL, but our architecture was designed for .NET 2003, so we have to keep following it. Which features are most important for enhancing performance?


  • How can I set a counter column value in MySQL?

    - by Jon Tackabury
    I have a table with a "SortID" column that is numbered using consecutive numbers. Whenever a row is deleted, it leaves a gap. Is there a way, using pure SQL, to renumber the rows? Something like this:

        UPDATE tbl SET SortID = {rowindex} ORDER BY SortID

    (I realize this isn't valid SQL; that's why I'm asking for help.) This should set the first row to #1, the second row to #2, and so on. Is this possible using SQL? Please forgive the poorly worded question, I'm not really sure of the best way to ask this. :)
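
    A hedged sketch of one common MySQL approach, assuming tbl is a single table and SortID is an integer column: renumber with a user variable so each row gets the next consecutive value. Illustrative, not a tested answer:

        -- Renumber SortID consecutively, closing any gaps.
        SET @rownum := 0;
        UPDATE tbl
        SET SortID = (@rownum := @rownum + 1)
        ORDER BY SortID;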


  • Why does Scala apply thunks automatically, sometimes?

    - by Anonymouse
    At just after 2:40 in ShadowofCatron's Scala Tutorial 3 video, it's pointed out that the parentheses following the name of a thunk are optional. "Buh?" said my functional programming brain, since the value of a function and the value it evaluates to when applied are completely different things. So I wrote the following to try this out. My thought process is described in the comments.

        object Main {
          var counter: Int = 10

          def f(): Int = {
            counter = counter + 1
            counter
          }

          def runThunk(t: () => Int): Int = {
            t()
          }

          def main(args: Array[String]): Unit = {
            val a = f()  // I expect this to mean "apply f to no args"
            println(a)   // and apparently it does
            val b = f    // I expect this to mean "the value f", a function value
            println(b)   // but it's the value it evaluates to when applied to no args
            println(b)   // and the evaluation happens immediately, not in the call
            runThunk(b)  // This is an error: it's not println doing something funny
            runThunk(f)  // Not an error: seems to be val doing something funny
          }
        }

    To be clear about the problem, this Scheme program (and the console dump which follows) shows what I expected the Scala program to do.

        (define counter (list 10))

        (define f
          (lambda ()
            (set-car! counter (+ (car counter) 1))
            (car counter)))

        (define runThunk
          (lambda (t) (t)))

        (define main
          (lambda args
            (let ((a (f)) (b f))
              (display a) (newline)
              (display b) (newline)
              (display b) (newline)
              (runThunk b)
              (runThunk f))))

        > (main)
        11
        #<procedure:f>
        #<procedure:f>
        13

    After coming to this site to ask about this, I came across this answer which told me how to fix the above Scala program:

        val b = f _  // Hey Scala, I mean f, not f()

    But the underscore 'hint' is only needed sometimes. When I call runThunk(f), no hint is required. But when I 'alias' f to b with a val then apply it, it doesn't work: the evaluation happens in the val; and even lazy val works this way, so it's not the point of evaluation causing this behaviour.

    That all leaves me with the question: Why does Scala sometimes automatically apply thunks when evaluating them? Is it, as I suspect, type inference? And if so, shouldn't a type system stay out of the language's semantics? Is this a good idea? Do Scala programmers apply thunks rather than refer to their values so much more often that making the parens optional is better overall?

    Examples written using Scala 2.8.0RC3, DrScheme 4.0.1 in R5RS.


  • May we have Ruby and Rails performance statistics? We're persuading the business to use Rails!

    - by thekingoftruth
    We're trying to convince our products officer that we want to use JRuby on Rails, and we're having a hard time coming up with statistics which show that:

        1. Coding time is less using Rails vs. say Struts or Zend Framework or what have you.
        2. Ruby (and JRuby in particular) performance isn't horrible (anymore).
        3. Rails performance isn't bad either.

    If you can get us some good stats quickly, we might have a chance!


  • How does the event dispatch thread work?

    - by Roman
    With the help of people on Stack Overflow I was able to get the following working code for the simplest GUI countdown (it just displays a window counting down the seconds). My main problem with this code is the invokeLater stuff. As far as I understand it, invokeLater sends a task to the event dispatching thread (EDT), and the EDT executes this task whenever it "can" (whatever that means). Is that right?

    To my understanding the code works like this:

        1. In the main method we use invokeLater to show the window (showGUI method). In other words, the code displaying the window will be executed in the EDT.
        2. In the main method we also start the counter, and the counter (by construction) is executed in another thread (so it is not in the event dispatching thread). Right?
        3. The counter is executed in a separate thread and periodically calls updateGUI. updateGUI is supposed to update the GUI, and the GUI is working in the EDT, so updateGUI should also be executed in the EDT. That is why the code for updateGUI is enclosed in invokeLater. Is that right?

    What is not clear to me is why we call the counter from the EDT. It is not executed in the EDT anyway: it immediately starts a new thread and the counter is executed there. So why can't we call the counter in the main method after the invokeLater block?

        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        public class CountdownNew {
            static JLabel label;

            // Method which defines the appearance of the window.
            public static void showGUI() {
                JFrame frame = new JFrame("Simple Countdown");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                label = new JLabel("Some Text");
                frame.add(label);
                frame.pack();
                frame.setVisible(true);
            }

            // Define a new thread in which the countdown is counting down.
            public static Thread counter = new Thread() {
                public void run() {
                    for (int i = 10; i > 0; i = i - 1) {
                        updateGUI(i, label);
                        try { Thread.sleep(1000); } catch (InterruptedException e) {}
                    }
                }
            };

            // A method which updates GUI (sets a new value of JLabel).
            private static void updateGUI(final int i, final JLabel label) {
                SwingUtilities.invokeLater(
                    new Runnable() {
                        public void run() {
                            label.setText("You have " + i + " seconds.");
                        }
                    }
                );
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        showGUI();
                        counter.start();
                    }
                });
            }
        }


  • How to attach boost::shared_ptr (or another smart pointer) to reference counter of object's parent?

    - by Checkers
    I remember encountering this concept before, but can't find it on Google now. If I have an object of type A, which directly embeds an object of type B:

        class A {
            B b;
        };

    How can I have a smart pointer to B, e.g. boost::shared_ptr<B>, that uses the reference count of A? Assume an instance of A itself is heap-allocated, so I can safely get its shared count using, say, enable_shared_from_this.
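
    The half-remembered concept appears to be the aliasing constructor, which both boost::shared_ptr and std::shared_ptr provide. A minimal sketch (the struct members are illustrative):

        // The aliasing constructor shares A's control block (reference count)
        // while the stored pointer targets the embedded B, so the whole A
        // stays alive as long as pb does.
        #include <boost/make_shared.hpp>
        #include <boost/shared_ptr.hpp>

        struct B { int x; };
        struct A { B b; };

        int main() {
            boost::shared_ptr<A> pa = boost::make_shared<A>();
            boost::shared_ptr<B> pb(pa, &pa->b);  // uses pa's reference count
            return 0;
        }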


  • How can I make my counter look less fake?

    - by Eddy Pronk
    I'm using this bit of code to display the number of users on a site. My customer is complaining it looks fake. Any suggestions?

        var visitors = 187584;

        var updateVisitors = function() {
            visitors++;
            var vs = visitors.toString(),
                i = Math.floor(vs.length / 3),
                l = vs.length % 3;
            while (i-- > 0)
                if (!(l == 0 && i == 0))
                    vs = vs.slice(0, i * 3 + l) + ',' + vs.slice(i * 3 + l);
            $('#count').text(vs);
            setTimeout(updateVisitors, Math.random() * 2000);
        };

        setTimeout(updateVisitors, Math.random() * 2000);


  • Performance of String literals vs constants for Session[...] dictionary keys

    - by FreshCode
    Session[Constant] vs Session["String Literal"] performance: I'm retrieving user-specific data like ViewData["CartItems"] = Session["CartItems"]; with a string literal for the keys on every request. Should I be using constants for this? If yes, how should I go about implementing frequently used string literals, and will it significantly affect performance on a high-traffic site? The related question I found does not address ASP.NET MVC or Session.
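
    For what it's worth, C# compile-time string constants are interned, so a const-based key lookup should perform the same as a literal; the practical win is catching typos at compile time. A minimal sketch (the class name is illustrative, not from the question):

        // Centralizing the key names costs nothing at runtime: the dictionary
        // lookup is identical to using the literal directly.
        public static class SessionKeys
        {
            public const string CartItems = "CartItems";
        }

        // e.g., inside a controller action:
        // ViewData[SessionKeys.CartItems] = Session[SessionKeys.CartItems];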


  • What is the performance difference between blocks and callbacks?

    - by Don
    One of the things that block objects, introduced in Snow Leopard, are good for is situations that would previously have been handled with callbacks. The syntax is much cleaner for passing context around. However, I haven't seen any information on the performance implications of using blocks in this manner. What, if any, performance pitfalls should I look out for when using blocks, particularly as a replacement for a C-style callback?
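
    For context, a hedged sketch of the two shapes being compared; the function names are illustrative. Invoking either is an indirect call; the extra cost of a block usually comes from copying it (and its captured variables) to the heap with Block_copy when it outlives the enclosing scope:

        // Compile with clang -fblocks; the blocks runtime ships with OS X.
        #include <stdio.h>

        // C-style callback: context is threaded through by hand.
        typedef void (*callback_t)(void *context, int result);

        static void run_with_callback(callback_t cb, void *context) {
            cb(context, 42);
        }

        // Block-based equivalent: captured state travels with the block.
        typedef void (^block_t)(int result);

        static void run_with_block(block_t block) {
            block(42);
        }

        static void on_done(void *context, int result) {
            printf("%s: %d\n", (const char *)context, result);
        }

        int main(void) {
            const char *tag = "callback";
            run_with_callback(on_done, (void *)tag);
            run_with_block(^(int result) { printf("block: %d\n", result); });
            return 0;
        }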


  • Prints line number in both txt file and list?

    - by jad
    I have this code, which prints the line number in infile but also the line number in words. What do I do to print only the line number of the txt file next to the words?

        d = {}
        counter = 0
        wrongwords = []

        for line in infile:
            infile = line.split()
            wrongwords.extend(infile)
            counter += 1
            for word in infile:
                if word not in d:
                    d[word] = [counter]
                if word in d:
                    d[word].append(counter)

        for stuff in wrongwords:
            print(stuff, d[stuff])

    The output is:

        hello [1, 2, 7, 9]   # this is printing the line number of the txt file
        hello [1]            # this is printing the line number of the list words
        hello [1]

    What I want is:

        hello [1, 2, 7, 9]
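
    One likely culprit, offered as a hedged sketch rather than a verified answer: the loop reassigns the name infile to a list of words, and the if/if pair appends the counter twice for new words. Using a separate name and setdefault fixes both (assuming infile is an open text file):

        d = {}
        wrongwords = []

        for lineno, line in enumerate(infile, start=1):
            words = line.split()  # don't reuse the name `infile` here
            wrongwords.extend(words)
            for word in words:
                # setdefault avoids the double-append of the if/if pair
                d.setdefault(word, []).append(lineno)

        for word in wrongwords:
            print(word, d[word])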


  • In CQRS (event-sourced), do you need a global sequence counter in the event store?

    - by Jon M
    In trying to get my head around CQRS (and DDD in general), I have come across situations where two events occur on different aggregates but the order of them has domain meaning. If so, they could happen so close together that a timestamp (as used by the sample implementations I have seen) cannot differentiate them, meaning the event store doesn't contain a 'complete' representation of the domain, as there is ambiguity over the order in which events occurred. As an example, the domain could fire a CustomerCreatedEvent, which applies to the Customer aggregate, and then a CustomerAssignedToAgent event on the Agent aggregate. The CustomerAssignedToAgent event doesn't make sense if it occurs before the CustomerCreatedEvent, but typically both of these might be fired as a result of one operation, which makes it likely that the timestamps would effectively be the same. So am I just modelling things badly? Should there ever be a situation where the sequence of events across different aggregates is important? Or should you keep a global sequence number on your event store, so that you can identify the exact sequence in which events occurred?
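
    For reference, a hedged sketch of what a global sequence often looks like in a relational event store, written in T-SQL with illustrative names: a single auto-increment column across all aggregates provides the store-wide ordering the question asks about, while a per-aggregate version orders each stream.

        CREATE TABLE Events (
            GlobalSequence   BIGINT IDENTITY(1,1) PRIMARY KEY,  -- store-wide order
            AggregateId      UNIQUEIDENTIFIER NOT NULL,
            AggregateVersion INT              NOT NULL,         -- per-stream order
            EventType        NVARCHAR(256)    NOT NULL,
            Payload          NVARCHAR(MAX)    NOT NULL,
            OccurredAtUtc    DATETIME2        NOT NULL,
            UNIQUE (AggregateId, AggregateVersion)
        );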


  • Incrementing a table column's data by one (MySQL)

    - by Praveen Prasad
    I have a table with columns like id || counter. When a certain event happens, I want the counter's value (at a particular id) to increase by one. Currently I am doing this:

        //get current value
        current_value = select counter from myTable where id='someValue'

        //increase value
        current_value++

        //update table with current value
        update myTable set counter=current_value where id='someValue';

    I am currently running 2 queries for this; please suggest a way to do it in one step.
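
    A hedged sketch of the usual one-statement answer: let the database do the read-modify-write itself, which is atomic and also avoids the race between the select and the update:

        UPDATE myTable SET counter = counter + 1 WHERE id = 'someValue';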


  • Is there a better way of replacing duplicates in a list (Python)?

    - by myeu2
    Given a list:

        l1 = ['a', 'b', 'c', 'a', 'a', 'b']

    desired output:

        ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']

    I created the following code to get the output. It's messy.

        for index in range(len(l1)):
            counter = 1
            list_of_duplicates_for_item = [dup_index for dup_index, item in enumerate(l1)
                                           if item == l1[index] and l1.count(l1[index]) > 1]
            for dup_index in list_of_duplicates_for_item[1:]:
                l1[dup_index] = l1[dup_index] + '_' + str(counter)
                counter = counter + 1

    Is there a more pythonic way of doing this? I couldn't find anything on the web.
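
    One more idiomatic possibility, sketched with collections.Counter and illustrative names: count occurrences up front, then suffix repeats with their occurrence index in a single pass.

        from collections import Counter

        def suffix_duplicates(items):
            total = Counter(items)  # how many times each item appears overall
            seen = Counter()        # how many times we have seen it so far
            out = []
            for item in items:
                seen[item] += 1
                if total[item] > 1 and seen[item] > 1:
                    out.append(item + '_' + str(seen[item] - 1))
                else:
                    out.append(item)
            return out

        print(suffix_duplicates(['a', 'b', 'c', 'a', 'a', 'b']))
        # -> ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']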


  • Switching from PHP to Ruby - is it the answer to performance?

    - by Industrial
    Hi everyone, When I ask performance-related questions about PHP applications, I increasingly get the answer that PHP really isn't the language for high-performance applications, and that a compiled language really is the way to go. The only thing holding me back with PHP is that it's what I have learned to work with for some while now, and development in it is quite rapid. So, is PHP a thing of the past that should be put aside in web applications in favour of Ruby, for instance? Thanks


  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP1 release was simultaneous with the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list, including Test and Lab Manager: http://support.microsoft.com/kb/983509

    The release addresses some of the most requested features from customers of Visual Studio 2010, like:

        - better Help support
        - IntelliTrace support for 64-bit and SharePoint
        - Silverlight 4 Tools in the box
        - unit testing support on .NET 3.5
        - a new performance wizard for Silverlight

    Another major addition is the announcement of unlimited load testing for Visual Studio 2010 Ultimate with MSDN subscribers! The benefits of the Visual Studio 2010 Load Test Feature Pack, and useful links:

        - Improved overall software quality through early lifecycle performance testing: lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load. By integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality.
        - Higher productivity and reduced TCO with the ability to scale without incremental costs: development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010.

    Download the Visual Studio 2010 Load Test Feature Pack Deployment Guide. Get started with stress and performance testing with Visual Studio 2010 Ultimate:

        - Quality Solutions Best Practice: Enabling Performance and Stress Testing throughout the Application Lifecycle
        - Hands-On-Lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010
        - How-Do-I videos: Use ASP.NET Profiler in Load Tests; Use Network Emulation in Load Tests
        - VHD/VPC walkthrough: Getting Started with Load and Performance Testing
        - Best Practice guidance: Visual Studio Performance Testing Quick Reference Guide


  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.

        Server 1: 500 GB hard drive, 8 GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50 GHz
        Server 2: 500 GB hard drive, 8 GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50 GHz

    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, and Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on it. The issue is that the discussions I have read do not clearly state whether they mean a virtual server running inside an OS (e.g., VMware Player running on Windows with the game server in a VM) or on a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since this kind of virtualization is different from running it inside an OS.

