Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • Reasons to learn MSIL

    - by mannu
    Hi, Learning MSIL is fun and all that. Understanding what is going on "under the hood" can in many ways improve how you write your code performance-wise. However, the IL produced by the compiler is quite verbose, and it does not tell the whole story, since the JIT will optimize away a lot of the code. Personally, I have had good use of my very basic IL understanding when I had to make a small fix in an assembly I do not have the source code for. But I could just as well have used Reflector to generate C# code. I would like to know if you have ever had good use of MSIL understanding and/or why you think it is worth learning (except for the fun of it, of course). I'd also like to know if you think one should not learn it, and why.
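    To make that concrete, here is a trivial C# method together with, in comments, roughly the IL a release build emits for it. This is a sketch; exact output varies by compiler version and build settings, and can be inspected with ildasm or Reflector:

        static int Add(int a, int b)
        {
            return a + b;
        }

        // Approximately the IL of a release build:
        //
        // .method private hidebysig static int32 Add(int32 a, int32 b) cil managed
        // {
        //     ldarg.0   // push the first argument onto the evaluation stack
        //     ldarg.1   // push the second argument
        //     add       // pop both, push their sum
        //     ret       // return the value on top of the stack
        // }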

    Read the article

  • Using SQL Server for WSS 3.0 instead of the Windows Internal Database

    - by val
    Hi folks, there are actually two related questions here. First: is it possible or advisable to use a full-blown stand-alone SQL Server for SharePoint Services WSS 3.0 instead of the Windows Internal Database it ships with? The client I am working for is asking us to use their existing SQL Server for all WSS content databases, to minimize admin effort and possibly improve performance. Second: would you advise installing WSS on one physical server and the content database on another server? Is there any gain in performance? Practicality? etc. By default, WSS and all of its databases are installed on the same single server. We don't really need a farm setup of MOSS, because the WSS capabilities are enough for our needs. Thanks, Val

    Read the article

  • What is the best way for communication between cluster nodes

    - by Tom
    I have an application written in a combination of ASP/VB6/VBScript and ASP.NET/C# that consists of a website part, a SOAP-like web service part, and a queue-processing part that processes incoming files in a hot folder. We are used to running under load balancers (Microsoft or other makes). Often we need to communicate between the different load-balanced servers. Currently we do this through the SQL Server database that is common to all nodes; however, this comes with a performance penalty, as each message requires a transaction and continual polling from the other nodes. What would be better ways to achieve this? Tom Appelby
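    One lighter-weight alternative to database polling on Windows is a message queue. Below is a minimal sketch using MSMQ via System.Messaging; the queue path, label, and string payload are illustrative assumptions, not anything prescribed by the question:

        using System;
        using System.Messaging;

        public static class NodeMessenger
        {
            // Hypothetical private queue; in practice each node would own one.
            private const string QueuePath = @".\Private$\clusterMessages";

            public static void Send(string text)
            {
                if (!MessageQueue.Exists(QueuePath))
                    MessageQueue.Create(QueuePath);

                using (var queue = new MessageQueue(QueuePath))
                    queue.Send(text, "node-message"); // body + label
            }

            public static string Receive()
            {
                using (var queue = new MessageQueue(QueuePath))
                {
                    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                    // Blocks until a message arrives - no transaction, no polling loop.
                    return (string)queue.Receive().Body;
                }
            }
        }

    The receiving node blocks (or uses BeginReceive) instead of polling a shared table, which removes both the transaction cost and the polling traffic described above.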

    Read the article

  • What are the pros and cons of keeping SQL in Stored Procs versus Code

    - by Guy
    What are the advantages/disadvantages of keeping SQL in your C# source code or in stored procs? I've been discussing this with a friend on an open source project that we're working on (C# ASP.NET Forum). At the moment, most of the database access is done by building the SQL inline in C# and calling the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have:

    Advantages of SQL in code:
    - Easier to maintain - don't need to run a SQL script to update queries
    - Easier to port to another DB - no procs to port

    Advantages of stored procs:
    - Performance
    - Security
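    For reference, the two approaches differ only slightly at the call site. A minimal ADO.NET sketch; the table name, proc name, and parameters are illustrative, not from the project in question:

        using System.Data;
        using System.Data.SqlClient;

        class QueryComparison
        {
            // Inline SQL, parameterized rather than concatenated.
            static void InlineSql(string connectionString, int forumId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Title FROM Posts WHERE ForumId = @forumId", conn))
                {
                    cmd.Parameters.AddWithValue("@forumId", forumId);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader()) { /* read rows */ }
                }
            }

            // The same query moved into a stored procedure.
            static void StoredProc(string connectionString, int forumId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("GetPostTitles", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@forumId", forumId);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader()) { /* read rows */ }
                }
            }
        }

    Note that parameterized inline SQL closes much of the security gap usually attributed to inline queries; the remaining differences are mostly about deployment and plan reuse.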

    Read the article

  • What is the best solution for reporting on object data in .NET?

    - by Peter Fox
    Hi, our projects use objects as the data source for reports. Our business layer returns single objects or IEnumerable collections. Our reports (quite complex) need to display the value-type properties of each object and of its related objects. A typical case: from a list of Category objects, display a master report with category data, then a subreport with data for each Product inside each Category, then a subreport for each Part of each Product, and so on. Reporting from the database is not an option for us. What we have tried so far:

    - Reporting Services: works, but you have to mess around with the XML definition of the report to define the datasource classes; very hard to work with if you use an object datasource, and architecturally not too clean.
    - Telerik Reports: quite nice (especially the architecture), but seems to have problems with complex reports (master/sub), does not give great paging control, and is rumored to have performance/crash problems (immature product).

    Does anyone know a good reporting solution that can be integrated into an ASP.NET application and works well with objects as datasources?

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linked to error log databases and performance counter databases to compare usage with errors, and so on. Having implemented this for just one system, after only 2-3 weeks we already have a 5GB database with around 10 million records. This is making any queries against the database quite slow, and it will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience with Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • What is the design principle behind servlets being singletons?

    - by Sandeep Jindal
    A servlet container generally creates one instance of a servlet and uses different threads on that same instance to serve multiple requests. (I know this can be changed using the deprecated SingleThreadModel and other features, but this is the usual way.) I used to think the simple reason behind this was the performance gain, since handing a request to a pooled thread is cheaper than creating a new servlet instance per request. But it seems this is not the reason. On the other hand, creating an instance per request has the small advantage that developers never have to worry about thread safety. I am trying to understand the reason for this design decision, given the trade-off against thread safety.

    Read the article

  • Android EditText and addTextChangedListener

    - by Alex
    I'm currently porting a database manager to Android, and for performance reasons I'd like to update only properties that have been modified. I'm trying to use addTextChangedListener to add modified entries to a list, but my program never enters any of the listener's methods:

        EditText Et = (EditText) Editors.get(Prop.Name);
        Et.addTextChangedListener(new TextWatcher() {
            @Override
            public void afterTextChanged(Editable s) {
                // TODO Auto-generated method stub
            }

            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
                // TODO Auto-generated method stub
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                if (Prop.GetType() == Property.PROPTYPE.num) {
                    float f = Float.parseFloat(s.toString());
                    Prop.FromString(f);
                } else {
                    Prop.FromString(s.toString());
                }
                propertiesToUpdate.add(Prop);
            } // this closing brace was missing in the original snippet
        });
        Et.setText(Prop.ToString());

    Read the article

  • Function objects in Java

    - by Tony
    I want to implement a JavaScript-like method in Java; is this possible? Say I have a Person class:

        public class Person {
            private String name;
            private int age;
            // constructor and accessors are omitted
        }

    And a list of Person objects:

        Person p1 = new Person("Jenny", 20);
        Person p2 = new Person("Kate", 22);
        List<Person> pList = Arrays.asList(new Person[] {p1, p2});

    I want to implement a method like this (the original pseudocode, cleaned up into compilable Java with a small callback interface):

        interface Operation<T> {
            void apply(T item);
        }

        static <T> void modList(List<T> list, Operation<T> op) {
            for (T item : list) {
                op.apply(item); // apply the function object to every element
            }
        }

        modList(pList, new Operation<Person>() {
            public void apply(Person p) {
                p.setAge(p.getAge() + 1); // increment each person's age
            }
        });

    modList receives two params: one is a list, the other is the "function object". It loops over the list and applies the function to every element. In a functional programming language this is easy; I don't know how Java does this. Maybe it could be done through a dynamic proxy, but does that have a performance trade-off?

    Read the article

  • Checking if a record exists in the DB: in a single step or two steps?

    - by Sinan
    Suppose you want to get a record from the database that returns a large amount of data and requires multiple joins. My question: is it better to use a single query that both checks whether the data exists and returns the result if it does? Or to run a simpler query first to check whether the data exists, and then, if the record exists, query again for the result, knowing that it's there? Example, with 3 tables a, b and ab (junction table):

        select * from a, b, ab where condition and condition and condition and condition etc...

    or:

        select id from a, b, ab where condition

    then, if a row exists, do the query above. I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?

    Read the article

  • What Are the Basic Tools for a New Project?

    - by Morgan Cheng
    For a long time, I thought that to start a new project we need only 3 basic tools:

    1) A build system (e.g. Maven & CruiseControl)
    2) A version control system (e.g. CVS & SVN & Git)
    3) A bug tracking system (e.g. Bugzilla)

    Yesterday, a senior guy told me that we need at least one thing more: a KPI (Key Performance Indicator). Without KPIs, it is impossible to measure whether the project is progressing well or not. A KPI is a SOFT tool compared to Maven/SVN/Bugzilla. Since I missed the soft tools, I believe there must be other kinds of tools I've missed as well. So, does anybody have ideas about what other basic tools are necessary for a new project?

    Read the article

  • Logging in Scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use that in the various methods. In Scala, it looks like the idiom is to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and the object is also bigger due to the additional reference. Is there an alternative that allows the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring an overhead in performance (getting a reference for each instance) or code complexity. Also, yes, I want to use log4j, simply because I'll be using 3rd-party libraries written in Java that are likely to use log4j.

    Read the article

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and appends the compressed data to a file. Once all frames have been written, a journal that includes metadata for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream for the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!

    Read the article

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? (Heroku uses PostgreSQL)

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL; however, they seem to depend on either PL/Python or PL/R, neither of which Heroku appears to have enabled for its PostgreSQL databases. In fact, I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is the possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex, I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance I can host somewhere else. (Note: since I was only able to post one link, because I'm new, I decided to share a pastie with some relevant information.)
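    For reference, the quantities a box plot needs can be computed directly from the sorted sample $x_{(1)} \le x_{(2)} \le \dots \le x_{(n)}$. This is one common convention; several slightly different quartile definitions exist:

        \mathrm{median} =
        \begin{cases}
          x_{((n+1)/2)} & n \text{ odd} \\
          \tfrac{1}{2}\bigl(x_{(n/2)} + x_{(n/2+1)}\bigr) & n \text{ even}
        \end{cases}

    with $Q_1$ the median of the lower half of the sample, $Q_3$ the median of the upper half, and the box height $\mathrm{IQR} = Q_3 - Q_1$. A SQL or Ruby implementation therefore only needs a sort (or ORDER BY with OFFSET/LIMIT) plus these index lookups.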

    Read the article

  • What is the best possible technology for pulling huge amounts of data from 4 remote servers

    - by Habib Ullah Bahar
    Hello, for one of our projects, we need to pull huge amounts of real-time stock data from 4 remote servers across two countries. The trivial process here is to check the sources at a regular interval and save the updates to the database. But as this is real-time stock data for more than 1000 companies, I have to pull every second, which I think isn't good in terms of memory or bandwidth. Please give me suggestions on which technology/platform we should choose so this can be achieved easily and with good performance. [We are flexible here: PHP, Python, Java, Perl - any one of them would be OK for us.]

    Read the article

  • Database EAV model: listing records that match a search

    - by Shyam Sunder Verma
    I am building a dynamic application. I have three tables (EAV model style):

    1: Items (ItemId, ItemName)
    2: Fields (FieldId, FieldName)
    3: FieldValues (ItemId, FieldId, Value)

    Can you tell me how to write a SINGLE query to get the first 20 records from ALL items where the value for FieldId = 4 is TRUE?

    Expected result:

        Columns:  ItemId | ItemName | Field1 | Field2 | Field3
        Each row: ItemId | ItemName | Value1 | Value2 | Value3

    Important concerns:

    1: The number of fields per item is not known.
    2: It needs to be ONE query.
    3: The query will be running on 100K records, so performance is a concern.
    4: I am using MySQL 5.0, so I need a solution for MySQL.

    Should I denormalize the tables if the above query is not possible at all? Any advice?

    Read the article

  • Linux: how to capture the screen and simulate mouse movements

    - by 2di
    Hi all, I need to capture the screen (like Print Screen) in a way that gives me access to pixel color data, so I can do some image recognition. After that, I will need to generate mouse events on the screen such as left click and drag-and-drop (moving the mouse while a button is pressed, then releasing it). Once that's done, the image will be deleted. Note: I need to capture the whole screen, everything the user can see, and I need to simulate clicks outside the window of my own program (if it makes any difference). Specs:

    OS: Ubuntu Linux
    Language: C++
    Performance is not very important; the "print screen" function will be executed once every ~10 sec.
    The process can run for up to 24 hours, so the method needs to be stable and free of memory leaks (as usual :)

    I was able to do this on Windows with the Win GDI and some Windows events, but I have no idea how to do it in Linux. Thanks a lot

    Read the article

  • How to diagnose an unlogged Error 500 on Apache?

    - by samuel morhaim
    We are running a very simple Symfony script that randomly returns an Error 500. The system administrator says he can't find any trace of an Error 500 in the error logs; however, using curl or Firebug, it is obvious that an Error 500 is being returned. The script simply parses a POST request submitted to a URL on our server. We have already checked performance, memory, etc., but nothing seems to be the problem. How is this possible? We have already enabled all debugging and logging on Apache, PHP, etc., and found nothing.

    Read the article

  • git: keeping 2 push/pull repos in sync (or 1 push/pull and 1 pull-only)

    - by xavjuan
    Hello, we work on multiple geographically separate sites. Today, all of our git clones live on one site, A. Users from site B have to ssh over to site A to do a git clone or to push in changes. These are bare repos that are updated through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like to have a copy of git repo X live on site A and site B, with some syncing mechanism between them. Or to have X live on both sites, but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? Thanks, -John
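    For the one-way variant (push only to A, read locally on B), one common pattern is to keep B as a mirror that is refreshed on a schedule or by a hook on A. A sketch with standard git commands; the host and path names are made up:

        # On site B: create a read-only mirror of the repo on site A.
        git clone --mirror ssh://siteA/repos/X.git

        # Then periodically (cron, or triggered by a post-receive hook on A):
        cd X.git && git remote update

    This sidesteps the simultaneous-push conflict entirely, since B never accepts pushes; the two-master case has no built-in git solution and needs a convention such as "one repo wins" or per-site branches.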

    Read the article

  • Checking whether an object exists in a StructureMap container

    - by Kevin Pang
    I'm using StructureMap to handle the creation of NHibernate's ISessionFactory and ISession. I've scoped ISessionFactory as a singleton so that it's only created once for my web app, and I've scoped ISession as a hybrid so that it will only be opened once per web request. I want to make sure that at the end of each web request, I properly dispose of the ISession if one was created for that web request. I figured I could put some code in my Application_EndRequest routine to first check if an ISession was created, and if so, call ISession.Dispose. My current workaround is to just open an ISession on Application_BeginRequest and dispose of it on Application_EndRequest, but that seems somewhat wasteful, in that static file requests for images, CSS files and whatnot will create an ISession without ever using it. I know that the overall performance hit is negligible, since ISessions are very lightweight, but it's getting annoying seeing all those ISessions being created inside NHProf.
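    One way to get lazy per-request sessions without relying on StructureMap internals is to track the session in HttpContext.Items. A minimal sketch; the key name and the NHibernateHelper factory holder are illustrative, not StructureMap or NHibernate APIs:

        using System.Web;
        using NHibernate;

        public static class RequestSession
        {
            private const string SessionKey = "current.nhibernate.session"; // illustrative key

            public static ISession Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    var session = (ISession)items[SessionKey];
                    if (session == null)
                    {
                        // Opened only on first use, so a static-file request
                        // that never touches the database opens nothing.
                        session = NHibernateHelper.SessionFactory.OpenSession();
                        items[SessionKey] = session;
                    }
                    return session;
                }
            }

            // Call from Application_EndRequest: disposes only if one was created.
            public static void DisposeIfCreated()
            {
                var session = (ISession)HttpContext.Current.Items[SessionKey];
                if (session != null)
                {
                    session.Dispose();
                    HttpContext.Current.Items.Remove(SessionKey);
                }
            }
        }

        public static class NHibernateHelper
        {
            // Illustrative holder for the singleton ISessionFactory.
            public static readonly ISessionFactory SessionFactory =
                new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory();
        }

    Application_EndRequest then simply calls RequestSession.DisposeIfCreated(), and NHProf stops showing sessions for requests that never used one.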

    Read the article

  • How to call a C# server-side method from JavaScript

    - by Nimesh
    Hi, I need to call a C# server method from JavaScript. I have a GridView with a column containing a dropdown list. When I change the dropdown's value, I need to call a server-side method from JavaScript and change the value of another textbox in the GridView. I am able to do this on the selected index change, but I am slightly worried about the performance. I am using ASP.NET with C#. Please let me know how to do this.
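    One common approach in ASP.NET is a static page method exposed to client script, which avoids the full postback of a SelectedIndexChanged event. A minimal sketch; the method, control, and lookup names are illustrative, and it assumes a ScriptManager on the page with EnablePageMethods="true":

        using System.Web.Services;

        public partial class OrdersPage : System.Web.UI.Page
        {
            // Static page method callable from client script without a postback.
            [WebMethod]
            public static string GetPrice(string productId)
            {
                return PriceLookup(productId); // illustrative server-side lookup
            }

            static string PriceLookup(string productId) { return "9.99"; }
        }

        // Client side, wired to the dropdown's onchange handler:
        //
        // function onProductChanged(ddl) {
        //     PageMethods.GetPrice(ddl.value, function (result) {
        //         document.getElementById('txtPrice').value = result;
        //     });
        // }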

    Read the article

  • Most Efficient Alternative Method of Storing Settings for iPhone Apps

    - by JPK
    I am not using the Settings bundle to store the settings for my app, as I prefer to allow the user to access the settings within the app (they may be changed fairly often). I do realize that there is the option to do both, but for now I am trying to find the most optimal place to store the settings within the app. I have a good number of settings (from what I have read, probably too many for NSUserDefaults), and the two main options I am considering are:

    1) storing the settings in a dictionary in the plist, loading the settings into an NSDictionary property in the app delegate, and accessing them via the sharedDelegate
    2) storing the settings in a Core Data entity (1 row in a Settings entity), loading the settings into a Settings object in the app delegate, and accessing them via the sharedDelegate

    Of these two, which would be the optimal method, performance-wise?

    Read the article

  • Finding the Formula for a Curve

    - by Mystagogue
    Is there a program that will take "response curve" values from me and provide a formula that approximates the response curve? It would be cool if such a program took a numeric "percent correct" (perhaps with a standard deviation) so that it returns simplified formulas when laxity is permissible, and more precise (viz. complex) formulas when the curve needs to be approximated closely. My interest is to play with the response curve values and the "laxity" factor until such a tool spits out a curve-fit formula simple enough that I know it will perform well during machine computations.
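    For context, tools that do this typically solve a least-squares fit: given sample points $(x_i, y_i)$ and a model family, say polynomials of degree $n$ (raising $n$ is the precision knob, lowering it the "laxity" knob described above), they pick coefficients $c_0, \dots, c_n$ minimizing the squared error:

        \min_{c_0,\dots,c_n} \; \sum_{i=1}^{m} \Bigl( y_i - \sum_{j=0}^{n} c_j x_i^{\,j} \Bigr)^{2}

    which is solved by the normal equations $(X^{\mathsf{T}}X)\,c = X^{\mathsf{T}}y$, where $X_{ij} = x_i^{\,j}$ is the Vandermonde matrix of the sample. The reported residual of the fit plays the role of the "percent correct" tolerance.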

    Read the article

  • Python urllib3 and how to handle cookie support?

    - by bigredbob
    So I'm looking into urllib3 because it has connection pooling and is thread-safe (so performance is better, especially for crawling), but the documentation is... minimal, to say the least. urllib2 has build_opener, so you can do something like:

        #!/usr/bin/python
        import cookielib, urllib2

        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        r = opener.open("http://example.com/")

    But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put the cookie in the header:

        #!/usr/bin/python
        import urllib3

        http_pool = urllib3.connection_from_url("http://example.com")
        myheaders = {'Cookie': 'some cookie data'}
        r = http_pool.get_url("http://example.org/", headers=myheaders)

    But I am hoping there is a better way, and that one of you can tell me what it is. Also, can someone tag this with "urllib3" please?

    Read the article

  • [C#] Async threaded TCP server

    - by mark_dj
    I want to create a high-performance server in C# which can take about ~10k clients. I have started writing a TcpServer in C#, and for each client connection I open a new thread. I also use one thread to accept the connections. So far so good; it works fine. The server has to deserialize incoming AMF objects, do some logic (like saving the position of a player), and send some objects back (serializing objects). I am not worried about the serializing/deserializing part at the moment. My main concern is that I will have a lot of threads with 10k clients, and I've read somewhere that an OS can only hold a few hundred threads. Are there any sources/articles available on writing a decent async threaded server? Are there other possibilities, or will 10k threads work fine? I've looked on Google, but I couldn't find much info about design patterns or approaches that explain it clearly.
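    The classic way past a thread per client in .NET is the asynchronous socket API, where a small pool of I/O threads services all connections. A minimal accept/receive sketch; the port, backlog, and buffer size are illustrative:

        using System;
        using System.Net;
        using System.Net.Sockets;

        class AsyncServer
        {
            class ClientState
            {
                public Socket Socket;
                public byte[] Buffer = new byte[4096];
            }

            static void Main()
            {
                var listener = new Socket(AddressFamily.InterNetwork,
                                          SocketType.Stream, ProtocolType.Tcp);
                listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
                listener.Listen(100);
                listener.BeginAccept(OnAccept, listener); // no dedicated accept thread
                Console.ReadLine();
            }

            static void OnAccept(IAsyncResult ar)
            {
                var listener = (Socket)ar.AsyncState;
                Socket client = listener.EndAccept(ar);
                listener.BeginAccept(OnAccept, listener); // keep accepting

                var state = new ClientState { Socket = client };
                client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                    SocketFlags.None, OnReceive, state);
            }

            static void OnReceive(IAsyncResult ar)
            {
                var state = (ClientState)ar.AsyncState;
                int read = state.Socket.EndReceive(ar);
                if (read == 0) { state.Socket.Close(); return; } // client disconnected

                // Deserialize/handle state.Buffer[0..read] here, then keep reading.
                state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                          SocketFlags.None, OnReceive, state);
            }
        }

    With this pattern the thread count is bounded by the thread pool rather than the client count, so 10k mostly-idle connections are feasible without 10k threads.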

    Read the article
