Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage etc. It's also good when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, after only 2-3 weeks we already have a 5GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?
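    For reference, whatever store is chosen needs to handle this kind of aggregate efficiently; a minimal sketch of a typical reporting query in SQL Server syntax, against a hypothetical IisLog table (table and column names here are assumptions, not the actual schema):

        SELECT TOP 20 UriStem,
               COUNT(*)         AS Hits,
               AVG(TimeTakenMs) AS AvgTimeTakenMs
        FROM   IisLog
        WHERE  LogTime >= DATEADD(day, -7, GETDATE())
        GROUP  BY UriStem
        ORDER  BY Hits DESC;

    An index on the filter column (LogTime here) is usually what keeps queries like this usable as the table grows.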

    Read the article

  • What messaging technologies in windows-ce for guaranteed msg delivery?

    - by Aidanapword
    All, We are building a windows-ce (6.0R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond:
      MSMQ
      a proprietary solution (what our prototype device is using)
      AMQP (research on using this in our context is now starting)
    ... are there any others? We will be transporting sensitive data (who isn't?!?!) over a public network, and large scale options are required. Anything running on an embedded device will be performance sensitive too. Thanks! Aidanapword

    Read the article

  • checking if records exist in DB, in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database which returns a large result and requires multiple joins. So my question would be: is it better to use a single query to check if the data exists and get the result if it exists, or to do a simpler query to check if the data exists and then, if the record exists, query once again to get the result knowing that it exists? Example: 3 tables a, b and ab (junction table): select * from a, b, ab where condition and condition and condition and condition etc... or select id from a, b, ab where condition, then if it exists do the query above. So I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?
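    A minimal sketch of the single-query approach with explicit joins, assuming id columns a.id / b.id and junction columns ab.a_id / ab.b_id (column names are assumptions, and LIMIT syntax varies by database):

        SELECT a.*, b.*
        FROM   a
        JOIN   ab ON ab.a_id = a.id
        JOIN   b  ON b.id    = ab.b_id
        WHERE  a.id = ?   -- plus the other conditions
        LIMIT  1;

    If the row usually exists, one round trip like this is typically cheaper than an existence probe followed by the same joins again; the probe only tends to pay off when misses are the common case and the full query is much heavier.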

    Read the article

  • Deleting orphans with JPA

    - by homaxto
    I have a one-to-one relation where I use CascadeType.PERSIST. This has over time built up a huge amount of child records that have not been deleted, to such an extent that it is reflected in the performance. Now I wish to add some code that cleans up the database, removing all the child records that are not referenced by a parent. At the moment we are talking 400K+ records, and I need to run the code on all customer installations just to be sure they do not run into the same problem. I think the best solution would be to run a named query (because we support two databases) that deletes the necessary records, and this is where I get into problems, because how should I write it in JPQL? The result I want can be defined by the following SQL statement, which unfortunately does not run on MySQL. DELETE FROM child c1 WHERE c1.pk NOT IN (SELECT DISTINCT p.pk FROM child c2 JOIN parent p ON p.child = c2.pk);
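    At the native-SQL level, one common workaround for MySQL's restriction on reading the table being deleted from is an anti-join delete; a hedged sketch, assuming parent.child holds the child's primary key as above:

        DELETE c
        FROM   child c
        LEFT JOIN parent p ON p.child = c.pk
        WHERE  p.pk IS NULL;

    For the JPQL route, a bulk delete whose subquery only touches the parent table (rather than self-joining child) may also get past the MySQL limitation, along the lines of DELETE FROM Child c WHERE c.id NOT IN (SELECT p.child.id FROM Parent p) - entity and property names here are guesses based on the mapping described.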

    Read the article

  • logging in scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use that in the various methods. In Scala, it looks like the idiom is to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and the object is also bigger due to the additional reference. Is there an alternative that allows the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring an overhead in performance (getting a reference for each instance) or code complexity. Also, yes, I want to use log4j, simply because I'll use 3rd party libraries written in Java that are likely to use log4j.
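    For comparison, a minimal sketch of the mix-in approach backed by log4j, using a lazy val so the logger is only fetched on first use (class names below are just illustrative):

        import org.apache.log4j.Logger

        trait Logging {
          // Resolved at most once per instance, and only when first accessed.
          @transient protected lazy val logger: Logger = Logger.getLogger(getClass)
        }

        class OrderService extends Logging {
          def place(): Unit = logger.info("placing order")
        }

    This still keeps one reference per instance; moving the logger into a companion object gives a true per-class instance, at the cost of losing the plain "with Logging" syntax.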

    Read the article

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? (Heroku uses PostgreSQL)

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL, however they seem to depend on either PL/Python or PL/R, and it seems like Heroku does not have either enabled for their PostgreSQL databases. In fact I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is a possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance I can host somewhere else. *note: since I was only able to post one link, because I'm new, I decided to share a pastie with some relevant information
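    In case the pure-Ruby route wins out, a minimal sketch of computing the median and quartiles from an already-fetched array of numbers (the percentile helper and the scores variable are illustrative, not part of the app):

        # Linear-interpolation percentile over a sorted array of numbers.
        def percentile(sorted, p)
          return nil if sorted.empty?
          rank = p * (sorted.size - 1)
          lower, upper = sorted[rank.floor], sorted[rank.ceil]
          lower + (upper - lower) * (rank - rank.floor)
        end

        values = scores.sort          # scores: numeric values fetched from the DB
        q1     = percentile(values, 0.25)
        median = percentile(values, 0.5)
        q3     = percentile(values, 0.75)

    The cost is pulling every value out of the database and sorting in Ruby, which is fine for modest result sets but is exactly what a database-side solution would avoid.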

    Read the article

  • Using SQL Server for WSS 3.0 instead of Windows Internal database

    - by val
    Hi folks, there are actually two related questions: is it possible or advisable to use a full-blown stand-alone SQL Server for SharePoint Services WSS 3.0 instead of the supplied Windows Internal Database it comes with? The client I am working for is asking to utilize their existing SQL Server for all WSS content databases, to possibly minimize admin effort and improve performance. As well, would you advise installing WSS on one physical server and the content database on another server? Any gain in performance? Practicality? etc. The default is that WSS and all of its databases are installed on the same single server. We don't really need a farm setup of MOSS, because the WSS capabilities are enough for our needs. Thanks, Val

    Read the article

  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but should work. My basic project structure looks like this:
      Solution.Web - Holds the web page that hosts Silverlight, as well as the proxies that access web services and process as required (and obviously the references to those web services).
      Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all generated code from serialized objects from those proxies, and code around those objects that needs to be client-side.
      Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.
    I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them.
        //Doesn't get much simpler than this.
        public interface INavigable { string Description { get; set; } }
        //Very simple class too
        public class IndexEntry : INavigable
        {
            public List<IndexCM> CMItems { get; set; }
            public string CPTCode { get; set; }
            public string DefinitionOfAbbreviations { get; set; }
            public string Description { get; set; }
            public string EtiologyCode { get; set; }
            public bool HighScore { get; set; }
            public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
            public bool IsExpanded { get; set; }
            public string ManifestationCode { get; set; }
            public string MorphologyCode { get; set; }
            public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
            public string OtherItalics { get; set; }
            public IndexEntry Parent { get; set; }
            public int Score { get; set; }
            public string SeeAlsoReference { get; set; }
            public string SeeReference { get; set; }
            public List<IndexEntry> SubEntries { get; set; }
            public int Words { get; set; }
        }
    Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-generated in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird. The BOOK project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the Web Service in the Web project where the thing is defined, though Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit:
    The Web project has a web service that processes data from a third-party service and has a class and interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-generated code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted; to anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve

    Read the article

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually, and append the compressed data onto a file. Once all frames are finished being written, a journal which includes meta data for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file i/o, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so does anyone have any tips for things to look out for? Thanks!
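    A minimal sketch of the read side with mmap, assuming the frame offset and compressed size have already been read from the journal (error handling trimmed, names illustrative):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstdint>
        #include <vector>

        // Map the whole file read-only and copy out one compressed frame.
        std::vector<uint8_t> read_frame(const char* path, off_t offset, size_t size) {
            int fd = open(path, O_RDONLY);
            struct stat st;
            fstat(fd, &st);

            void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            madvise(base, st.st_size, MADV_RANDOM);   // hint: random access pattern

            const uint8_t* p = static_cast<const uint8_t*>(base) + offset;
            std::vector<uint8_t> frame(p, p + size);  // hand this to zlib to inflate

            munmap(base, st.st_size);
            close(fd);
            return frame;
        }

    With files in the tens to hundreds of gigabytes, a 64-bit build is needed to map the whole file; whether this actually beats pread/ifstream for the sequential write path is something to benchmark rather than assume.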

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure on this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an ip address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table) This table will have the following fields: rating_id (always - unique), item_id (always - not unique), user_id (optional - not unique), ip_address (always - not unique), rating_value (always - not unique), has_review (bool). Now I envision 90% of the queries going something like this: When a user rates something - select where item_id = x and ip_address = y, (if rows = 0) insert rating. When in user account pages - select where ip_address = x or username = y. Now none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
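    Non-unique columns can absolutely be indexed; a hedged sketch of the table with a composite index matching the two lookups described (column names are taken from the question, types are guesses):

        CREATE TABLE ratings (
            rating_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            item_id      INT UNSIGNED NOT NULL,
            user_id      INT UNSIGNED NULL,
            ip_address   VARCHAR(45)  NOT NULL,
            rating_value TINYINT      NOT NULL,
            has_review   TINYINT(1)   NOT NULL DEFAULT 0,
            -- covers "item_id = x AND ip_address = y"; could be UNIQUE to enforce one rating per IP
            KEY idx_item_ip (item_id, ip_address),
            -- cover the account-page lookups
            KEY idx_ip (ip_address),
            KEY idx_user (user_id)
        ) ENGINE=InnoDB;

    The composite index also serves queries on item_id alone, since it is the leading column.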

    Read the article

  • Finding the Formula for a Curve

    - by Mystagogue
    Is there a program that will take "response curve" values from me, and provide a formula that approximates the response curve? It would be cool if such a program would take a numeric "percent correct" (perhaps with a standard deviation) so that it returns simplified formulas when laxity is permissible, and more precise (viz. complex) formulas when the curve needs to be approximated closely. My interest is to play with the response curve values and "laxity" factor, until such a tool spits out a curve-fit formula simple enough that I know it will be high performance during machine computations.

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work on multiple geographically separate sites. Today our git clones all live on one site, A. Users from site B have to ssh over to do a git clone or to push in changes. These are bare repos where the update is through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like to have a copy of git repo X live on site A and site B... and have some syncing mechanism between them. OR have X live on both sites, but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? thanks, -John
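    One hedged sketch of the "push to A, pull-only mirror on B" variant using plain git commands (host names and paths are made up): site A stays the only repo that accepts pushes, and site B keeps a read-only mirror refreshed on a schedule, so the conflicting-push scenario cannot arise:

        # on site B: create a read-only mirror of the authoritative repo on site A
        git clone --mirror ssh://siteA/srv/git/project.git /srv/git/project.git

        # refresh periodically (e.g. from cron); clones on site B stay fast and local
        cd /srv/git/project.git && git remote update --prune

        # developers on site B: clone locally, but push back to site A
        git clone /srv/git/project.git project
        git remote set-url --push origin ssh://siteA/srv/git/project.git

    Two mirrors that both accept pushes run into exactly the simultaneous-push conflict described, which is why keeping a single push target is the usual answer.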

    Read the article

  • Best practice for Python Assert

    - by meade
    Is there a performance or code maintenance issue with using assert as part of the standard code instead of using it just for debugging purposes? Is assert x >= 0, 'x is less than zero' better or worse than if x < 0: raise Exception, 'x is less than zero'? Also, is there any way to set a business rule like "if x < 0, raise error" that is always checked without the try, except, finally? So, if at any time throughout the code x is < 0 an error is raised - like if you set assert x >= 0 at the start of a function, anywhere within the function where x becomes less than 0 an exception is raised?
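    Worth noting when weighing the two: assert statements are stripped entirely when Python runs with -O, so business rules usually get an explicit raise. A small sketch of both, plus a property setter as the closest thing to an "always checked" rule (names are illustrative):

        def set_balance(x):
            assert x >= 0, "x is less than zero"            # stripped under "python -O"
            ...

        def set_balance_strict(x):
            if x < 0:
                raise ValueError("x is less than zero")     # always enforced
            ...

        class Account:
            """Every assignment to .x goes through the setter check."""
            def __init__(self, x=0):
                self.x = x

            @property
            def x(self):
                return self._x

            @x.setter
            def x(self, value):
                if value < 0:
                    raise ValueError("x is less than zero")
                self._x = value

    The property only guards attribute assignment; there is no built-in way to watch an arbitrary local variable for the whole body of a function.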

    Read the article

  • Python urllib3 and how to handle cookie support?

    - by bigredbob
    So I'm looking into urllib3 because it has connection pooling and is thread safe (so performance is better, especially for crawling), but the documentation is... minimal, to say the least. urllib2 has build_opener, so you can do something like:
      #!/usr/bin/python
      import cookielib, urllib2
      cj = cookielib.CookieJar()
      opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
      r = opener.open("http://example.com/")
    But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put it in the header:
      #!/usr/bin/python
      import urllib3
      http_pool = urllib3.connection_from_url("http://example.com")
      myheaders = {'Cookie':'some cookie data'}
      r = http_pool.get_url("http://example.org/", headers=myheaders)
    But I am hoping there is a better way, and that one of you can tell me what it is. Also, can someone tag this with "urllib3" please.

    Read the article

  • Outputting resized animated gif to browser using imagick

    - by Freeman
    I intend to resize an animated gif and output it to the browser on-the-fly. My problem is that when I save the resized image it is of good quality, but if I echo it to the browser it is of poor quality and the animation is removed. Here is the code:
      header("Content-type:image/gif");
      try {
          /* Read in the animated gif */
          $animation = new Imagick("images/nikks.gif");
          /*** Loop through the frames ***/
          foreach ($animation as $frame) {
              /*** Thumbnail each frame ***/
              $frame->thumbnailImage(200, 200);
              /*** Set virtual canvas size to 200x200 ***/
              $frame->setImagePage(200, 200, 0, 0);
          }
          /*** Write image to disk. Notice writeImages instead of writeImage ***/
          //$animation->writeImages("images/nikkyo1.gif", true);
          echo $animation;
      } catch (Exception $e) {
          echo $e->getMessage();
      }
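    A possible explanation is that echoing the Imagick object only emits the first frame. One hedged variant that coalesces the frames and sends them all to the output (assuming the Imagick extension's coalesceImages()/deconstructImages()/getImagesBlob() behave as documented):

        $animation = new Imagick("images/nikks.gif");
        $animation = $animation->coalesceImages();      // flatten inter-frame offsets first
        foreach ($animation as $frame) {
            $frame->thumbnailImage(200, 200);
            $frame->setImagePage(200, 200, 0, 0);
        }
        $animation = $animation->deconstructImages();   // re-optimize the frames
        header("Content-Type: image/gif");
        echo $animation->getImagesBlob();                // all frames, not just the first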

    Read the article

  • GWT : Type of Container

    - by moorsu
    I see that there are two ways of transferring objects from server to client:
      1. Use the same domain object (Contact.java) as used in the service layer. (I do not use Hibernate.)
      2. Use a HashMap to send the domain object field values in the form of a Map, with the help of the BeanUtilsBean class. For multiple objects, use a List. Similarly, use a Map to submit form values from client to server.
    Is there any performance advantage for option 1 over 2? Is there a way to hide the classname/package name that is sent to the browser if we use option 1? thanks!

    Read the article

  • PDF text search and split library

    - by Horace Ho
    I am looking for a server side PDF library (or command line tool) which can: split a multi-page PDF file into individual PDF files, based on a search of the PDF file content. Examples: Search for the "Page ???" pattern in the text and split the big PDF into 001.pdf, 002.pdf, ... ???.pdf. A server program will scan the PDF, look for the search pattern, save the page(s) which match the pattern, and save the file to disk. Integration with PHP / Ruby would be nice; a command line tool is also acceptable. It will be a server side (linux or win32) batch processing tool. GUI/login is not supported. i18n support would be nice but is not required. Thanks~

    Read the article

  • Display Computer Info on C# Web Application

    - by Gene
    I want to build a page for end users to visit (in our MPLS network) that shows the following information about them: Computer Name, OS, Disk Space, Memory, IP Address, Active Directory User Name, Password Expiration Time (as defined by Global Policy). Maybe a few other things such as Trend Micro Office current version vs. their version, # of MS updates needed (we utilize WSUS), and a few other things in the future. My question is, how would I pull this information from the user when they visit the page? What is the proper function for this? Does anyone have examples they wish to share for me to learn from, if possible? Thanks so much in advance, Gene

    Read the article

  • How can I dynamically inject code to event handlers in Delphi?

    - by mjustin
    For debugging / performance tests I would like to dynamically add logging code to all event handlers of components of a given type. For example, for all dataset components located on a TDataModule, I would like to add some code to the BeforeOpen and the AfterOpen events to store the start and end time, and send a line to a logger with the elapsed time in the AfterOpen event. I would prefer to do this dynamically (no component subclassing), so that I can add this to all existing datamodules and forms with minimal effort, only when needed. Iterating all components and filtering by their type is easy, but for the components which already have event handlers assigned, I need a way to store the existing event handlers and assign a new, modified event handler which first does the logging and then invokes the original code which was already present. Is there a design pattern which can be applied, or even some example code which shows how to implement this in Delphi?
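    A hedged sketch of the usual interception pattern: a small spy object per dataset (created while iterating Components) stores the original handlers and forwards to them. Class name and the Log() call are invented for illustration; the event type and helpers come from the DB, SysUtils and DateUtils units:

        type
          TDataSetSpy = class(TComponent)
          private
            FOldBeforeOpen: TDataSetNotifyEvent;
            FOldAfterOpen: TDataSetNotifyEvent;
            FStart: TDateTime;
          public
            procedure Hook(DataSet: TDataSet);
            procedure NewBeforeOpen(DataSet: TDataSet);
            procedure NewAfterOpen(DataSet: TDataSet);
          end;

        procedure TDataSetSpy.Hook(DataSet: TDataSet);
        begin
          FOldBeforeOpen := DataSet.BeforeOpen;   // keep the existing handlers
          FOldAfterOpen  := DataSet.AfterOpen;
          DataSet.BeforeOpen := NewBeforeOpen;    // swap in the logging wrappers
          DataSet.AfterOpen  := NewAfterOpen;
        end;

        procedure TDataSetSpy.NewBeforeOpen(DataSet: TDataSet);
        begin
          FStart := Now;
          if Assigned(FOldBeforeOpen) then FOldBeforeOpen(DataSet);
        end;

        procedure TDataSetSpy.NewAfterOpen(DataSet: TDataSet);
        begin
          if Assigned(FOldAfterOpen) then FOldAfterOpen(DataSet);
          // Log() stands in for whatever logging routine the project already uses
          Log(Format('%s opened in %d ms',
            [DataSet.Name, MilliSecondsBetween(Now, FStart)]));
        end;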

    Read the article

  • XmlDocument caching memory usage

    - by mdsharpe
    We are seeing very high memory usage in .NET web applications which use XmlDocument. A small (~5MB) XML document is loaded into an XmlDocument object and stored in HttpContext.Cache for easy querying and XSLT transformation on each page load. The XML is modified on disk periodically, so the cache has a dependency on the file. Such an application appears to be using hundreds of megabytes of RAM. I have experimented with requesting garbage collection on each request start, and this keeps the RAM usage far lower, but I cannot imagine this is good practice. Does anyone have any suggestions as to how we can achieve the same goal but with lower RAM usage?
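    One hedged alternative, if the document is only ever queried and transformed (never edited in memory), is caching a read-only XPathDocument keyed to the file so on-disk changes still invalidate it; a sketch in C#, with the cache key and path as illustrative names:

        using System.Web;
        using System.Web.Caching;
        using System.Xml.XPath;

        static class CatalogCache
        {
            public static XPathDocument Get(HttpContext ctx)
            {
                const string key = "catalog-xml";
                var doc = ctx.Cache[key] as XPathDocument;
                if (doc == null)
                {
                    string path = ctx.Server.MapPath("~/App_Data/catalog.xml");
                    doc = new XPathDocument(path);   // read-only, typically lighter than XmlDocument
                    ctx.Cache.Insert(key, doc, new CacheDependency(path));
                }
                return doc;   // usable as input to XPath queries and XslCompiledTransform
            }
        }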

    Read the article

  • SELECT INTO or Stored Procedure?

    - by Kerry
    Would this be better as a stored procedure, or should I leave it as is? INSERT INTO `user_permissions` ( `user_id`, `object_id`, `type`, `view`, `add`, `edit`, `delete`, `admin`, `updated_by_user_id` ) SELECT `user_id`, $object_id, '$type', 1, 1, 1, 1, 1, $user_id FROM `user_permissions` WHERE `object_id` = $object_id_2 AND `type` = '$type_2' AND `admin` = 1 You can think of this with different objects; let's say you have groups and subgroups. If someone creates a subgroup, this makes everyone who had access to the parent group now also have access to the subgroup. I've never made a stored procedure before, but this looks like it might be time. This will probably be called very often. Should I be creating a procedure, or will the performance difference be insignificant?
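    If it does get wrapped up, a hedged sketch of what the procedure might look like in MySQL (parameter names are invented); the main win is usually passing parameters instead of building SQL strings, rather than raw speed:

        DELIMITER //
        CREATE PROCEDURE copy_admin_permissions(
            IN p_object_id INT, IN p_type VARCHAR(32),
            IN p_src_object_id INT, IN p_src_type VARCHAR(32),
            IN p_updated_by INT)
        BEGIN
            INSERT INTO `user_permissions`
                (`user_id`, `object_id`, `type`, `view`, `add`, `edit`, `delete`, `admin`, `updated_by_user_id`)
            SELECT `user_id`, p_object_id, p_type, 1, 1, 1, 1, 1, p_updated_by
            FROM   `user_permissions`
            WHERE  `object_id` = p_src_object_id AND `type` = p_src_type AND `admin` = 1;
        END //
        DELIMITER ;

        CALL copy_admin_permissions(42, 'subgroup', 7, 'group', 1);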

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a question regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the cryptoki API for every file you want to encrypt/decrypt. My queries are: Is there an overhead in opening and closing individual sessions for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)? What is the maximum number of sessions I can have open for an HSM simultaneously, without affecting the performance? Is opening and closing a session for processing individual files the best approach, or is opening a session, processing multiple files and then closing the session the better approach? Thanks

    Read the article

  • ClickOnce + .NET Client Profile framework + SQL Compact + offline installation

    - by grimmersnee
    I am having issues with prerequisites while trying to publish my Windows .NET 3.5 application using ClickOnce. I want my application to work offline as well as online, so I want to include the prerequisites in the installation and not make the client download them via the internet. My prerequisites are: .NET Framework Client Profile, SQL Compact 3.5. I have downloaded the .NET Framework Client Profile offline installer, installed it, and put the DotNetFx35Client.exe in this location: C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\DotNetFx35Client. Under the Project - Publish tab I have checked "Download prerequisites from the following location" and entered \MachineName\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\DotNetFx35Client. Following http://stackoverflow.com/questions/1046370?tab=oldest#tab-top, I am however still getting the error: The install location for prerequisites has not been set to 'component vendor's website' and file 'DotNetFx35Client\DotNetFx35ClientSetup.exe' in item '.NET Framework Client Profile' cannot be located on disk.

    Read the article

  • [C#] Async threaded TCP server

    - by mark_dj
    I want to create a high performance server in C# which could take about ~10k clients. I started writing a TCP server in C#, and for each client connection I open a new thread. I also use one thread to accept the connections. So far so good; it works fine. The server has to deserialize incoming AMF objects, do some logic (like saving the position of a player) and send some objects back (serializing objects). I am not worried about the serializing/deserializing part at the moment. My main concern is that I will have a lot of threads with 10k clients, and I've read somewhere that an OS can only hold a few hundred threads. Are there any sources/articles available on writing a decent async threaded server? Are there other possibilities, or will 10k threads work fine? I've looked on Google, but I couldn't find much info about design patterns or approaches which explain it clearly.
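    A minimal sketch of the asynchronous alternative, where connections are serviced by pooled I/O callbacks instead of one dedicated thread each. This assumes .NET 4.5+ with async/await (names are illustrative); on older frameworks the equivalent is the Begin/End or SocketAsyncEventArgs pattern:

        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class AsyncServer
        {
            public async Task RunAsync(int port)
            {
                var listener = new TcpListener(IPAddress.Any, port);
                listener.Start();
                while (true)
                {
                    TcpClient client = await listener.AcceptTcpClientAsync();
                    Task ignored = HandleAsync(client);   // not awaited: each connection runs on pooled callbacks
                }
            }

            private async Task HandleAsync(TcpClient client)
            {
                using (client)
                {
                    var stream = client.GetStream();
                    var buffer = new byte[8192];
                    int read;
                    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                    {
                        // deserialize AMF, apply game logic, serialize the reply...
                        await stream.WriteAsync(buffer, 0, read);   // echo as a placeholder
                    }
                }
            }
        }

    No thread is tied up while a connection is idle, which is what makes ~10k concurrent clients feasible without 10k threads.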

    Read the article

  • How to diagnose an unlogged Error 500 on Apache?

    - by samuel morhaim
    We are running a very simple Symfony script that randomly returns an Error 500. The system administrator says he can't find any trace of an Error 500 in the error logs; however, using curl or Firebug, it is obvious that an Error 500 is being returned. The script simply parses a POST request submitted to a URL on our server. We have already checked performance, memory, etc., but nothing seems to be the problem. How is this possible? We have already enabled all debugging and logging on Apache, PHP, etc., and nothing.

    Read the article
