Search Results

Search found 16230 results on 650 pages for 'rest day'.

Page 187/650 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • When to use Hibernate vs. simple ResultSets for a small application

    - by luke
    I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it uses Hibernate extensively for its data access. The component I am working on, however, is a very straightforward data-importing service. Basically the workflow is:

      1. Listen for a network event.
      2. Parse the data packet and extract a set of identifiers.
      3. Map the identifier set to a primary key in our database.
      4. Parse the rest of the packet and insert items into a related table using the foreign key found in step 3.
      5. Repeat.

    The previous version of this component used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular, it is EOL), so I am in charge of replacing the data access layer for this component. On the one hand I think I should use Hibernate because that's what the rest of the application does; on the other, I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon. So my question is (and I understand it is subjective): at what point do you think the added complexity of using an ORM tool (in terms of configuration, dependencies, ...) is worth it? UPDATE: due to the way the data access layer for the main application was written (weird dependencies), I cannot easily reuse it; I would have to implement it myself.
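
    If the plain java.sql route wins out, the whole workflow above fits comfortably in one small class. The sketch below is only an illustration of that trade-off: the table and column names (device_map, import_items, payload) are invented here, and a production version would take connections from a pool rather than DriverManager.

        import java.sql.*;

        public class PacketImporter {

            private final String url;
            private final String user;
            private final String pass;

            public PacketImporter(String url, String user, String pass) {
                this.url = url;
                this.user = user;
                this.pass = pass;
            }

            /** Steps 3 and 4 of the workflow: map the identifier to a key, then insert the items. */
            public void importPacket(String identifier, Iterable<String> items) throws SQLException {
                try (Connection con = DriverManager.getConnection(url, user, pass)) {
                    con.setAutoCommit(false);
                    long deviceId = lookupKey(con, identifier);
                    try (PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO import_items (device_id, payload) VALUES (?, ?)")) {
                        for (String item : items) {
                            ps.setLong(1, deviceId);
                            ps.setString(2, item);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                    }
                    con.commit();
                }
            }

            /** Maps the identifier set to the primary key of the matching row. */
            private long lookupKey(Connection con, String identifier) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT id FROM device_map WHERE identifier = ?")) {
                    ps.setString(1, identifier);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) throw new SQLException("Unknown identifier: " + identifier);
                        return rs.getLong("id");
                    }
                }
            }
        }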

    Read the article

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds well under low load, but under high load (around 100 users/sec) it consumes 100% CPU and then slows down for lack of CPU.

    Problem: Profiling the application gives me the time taken by each function, and this time increases under high load. But the time consumed may be due to complex calculation or to waiting for CPU. So how do I find the CPU cycles consumed by a piece of code, since reducing CPU consumption should improve response time? Either I have written reasonably efficient code and simply need more CPU power, or I have some wasteful code hogging the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to load-test the webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds over 100 requests vs. 1.25 seconds for a single request.

    More info on the configuration: Nginx + uWSGI with 4 workers; no database is used, only responses from a REST API; the REST API response is cached after the first hit, so it doesn't make a difference; ujson is used for JSON parsing.

    Curious to know: Python/Django is used by so many organizations for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All I have found are casual snippets of code that perform profiling.
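
    The distinction the question hinges on is wall-clock time versus CPU time: code that is waiting (for a scheduler slot, a socket, a lock) accumulates wall time but few CPU cycles. In Python the two clocks are exposed as time.perf_counter() and time.process_time(); as a language-neutral illustration, here is a minimal Java sketch that measures both around a block of code.

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadMXBean;

        public class CpuVsWallTime {
            public static void main(String[] args) throws InterruptedException {
                ThreadMXBean threads = ManagementFactory.getThreadMXBean();

                long wallStart = System.nanoTime();
                long cpuStart  = threads.getCurrentThreadCpuTime();  // CPU nanoseconds used by this thread

                long acc = busyWork(50_000_000);  // burns CPU
                Thread.sleep(200);                // burns wall time only

                long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
                long cpuMs  = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;

                // A large gap between wall and CPU time means the code was waiting, not computing.
                System.out.printf("wall: %d ms, cpu: %d ms (result %d)%n", wallMs, cpuMs, acc);
            }

            private static long busyWork(int n) {
                long acc = 0;
                for (int i = 0; i < n; i++) acc += i % 7;
                return acc;
            }
        }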

    Read the article

  • Creating a Simple C# Wrapper to clean up code

    - by Tangopop
    I have this code:

        public void Contacts(string domainToBeTested, string[] browserList, string timeOut, int numberOfBrowsers)
        {
            verificationErrors = new StringBuilder();
            for (int i = 0; i < numberOfBrowsers; i++)
            {
                ISelenium selenium = new DefaultSelenium("LMTS10", 4444, browserList[i], domainToBeTested);
                try
                {
                    selenium.Start();
                    selenium.Open(domainToBeTested);
                    selenium.Click("link=Email");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-2']/p/a/strong"));
                    selenium.Click("link=Address");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-3']/p/strong"));
                    selenium.Click("link=Telephone");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-1']/ul/li/strong"));
                }
                catch (AssertionException e)
                {
                    verificationErrors.AppendLine(browserList[i] + " :: " + e.Message);
                }
                finally
                {
                    selenium.Stop();
                }
            }
            Assert.AreEqual("", verificationErrors.ToString(), verificationErrors.ToString());
        }

    My problem is that I would like to reuse the code surrounding the 'try' many, many times in the rest of the test code. I think it has something to do with wrappers, but I can't get a simple answer for this from the web. In simple terms, the only piece of this code that changes is the bit inside the try {}; the rest is standard code that I have currently repeated over 100 times, and it is turning out to be a pain to maintain. Hope this is clear, many thanks.
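
    One common way to factor this out is to keep the browser loop and the try/catch/finally scaffolding in a single method and pass the part that varies in as a callback. The original is C#; the sketch below shows the shape of the idea in Java, with a made-up Session class standing in for ISelenium (none of this is the real Selenium API), so only the lambda changes from test to test.

        import java.util.function.Consumer;

        public class BrowserTestRunner {

            /** Hypothetical stand-in for the real browser-session type (not the Selenium API). */
            public static class Session {
                private final String browser;
                Session(String browser) { this.browser = browser; }
                void start()          { System.out.println(browser + ": start"); }
                void open(String url) { System.out.println(browser + ": open " + url); }
                void stop()           { System.out.println(browser + ": stop"); }
                boolean isElementPresent(String locator) { return true; }  // fake check for the demo
            }

            private final StringBuilder verificationErrors = new StringBuilder();

            /** The boilerplate lives here once; only the per-test checks vary. */
            public void forEachBrowser(String domain, String[] browsers, Consumer<Session> checks) {
                for (String browser : browsers) {
                    Session session = new Session(browser);
                    try {
                        session.start();
                        session.open(domain);
                        checks.accept(session);  // the part that used to sit inside try {}
                    } catch (AssertionError e) {
                        verificationErrors.append(browser).append(" :: ").append(e.getMessage()).append('\n');
                    } finally {
                        session.stop();
                    }
                }
                if (verificationErrors.length() > 0) throw new AssertionError(verificationErrors.toString());
            }

            public static void main(String[] args) {
                // A call site shrinks to the checks themselves.
                new BrowserTestRunner().forEachBrowser("http://example.com",
                        new String[] { "firefox", "chrome" },
                        s -> { if (!s.isElementPresent("link=Email")) throw new AssertionError("Email link missing"); });
            }
        }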

    Read the article

  • PayPal Sandbox 2014: proper sign-up link for a non-US test account

    - by Flood Gravemind
    OK, I had a PayPal Sandbox account a year ago. I am developing a new site for a client, and when I try to log in it won't recognize my email address. I tried the "forgot my password" flow and still nothing, so I guessed that the account may have been deleted due to long inactivity. I then tried to sign up for a new one. I have entered my details three times now and spent six hours trying to figure out the proper link to do this. One Sandbox link required me to enter a US ZIP code, and that's a dead end. I am not even sure which PayPal account I signed up for those three times; no email, nothing at all. I am a non-US developer, and the FAQ link for non-US developers just points to their REST API. Can someone please guide me to the proper PayPal Sandbox setup for non-US developers, including the proper sign-up links? And I know Stack Overflow does not like rants, but from my experience dealing with PayPal, GTA 6 should include a satirical PayPal company with a "PayPal Developer Rampage" mode for the main protagonist, who also happens to be a developer. EDIT: the REST API does not cover the UK :(

    Read the article

  • PHP OOP class sensitive to counter field name!

    - by Mac Taylor
    Hey guys, I recently managed to write a class for my stories, and everything is fine except for the counter field that stores a page's hits. Here is my class:

        class nk_posts
        {
            var $data = array();

            public function _data()
            {
                global $db;

                $result = $db->sql_query("
                    SELECT bt_stories.*, bt_tags.*, bt_topics.*,
                           group_concat(DISTINCT bt_tags.tag) as mytags,
                           group_concat(DISTINCT bt_topics.topicname) as mytopics
                    FROM bt_stories, bt_tags, bt_topics
                    WHERE CONCAT(' ', bt_stories.tags, ' ') LIKE CONCAT('%', bt_tags.tid, '%')
                      AND CONCAT(' ', bt_stories.associated, ' ') LIKE CONCAT('%', bt_topics.topicid, '%')
                      AND time<=now()
                      AND section='news'
                      AND approved='1'
                    GROUP BY bt_stories.sid
                    ORDER BY bt_stories.sid DESC
                ");

                while ($this->data = $db->sql_fetchrow($result)) {
                    $this->sid     = $this->data['sid'];
                    $this->title   = $this->data['title'];
                    $this->counter = $this->data['counter'];
                    //------ Rest of Fields ------
                    $this->_output();
                }
            }

            public function _output()
            {
                themeindex(
                    $this->sid,
                    $this->title,
                    $this->counter,
                    //------ Rest of Fields ------
                );
            }
        }

    The problem: this class can't show the counter field's value, but if I change the counter field's name to something else, like hit, everything works fine. I'm sure it works if I write it the normal PHP/MySQL way, but I need this to be done in an OOP way. Any comment on why it's sensitive to the counter field name?!

    Read the article

  • JPA 2.0 Provider Hibernate Spring MVC 3.0

    - by user558019
    Dear all, I have a very strange problem. We are using JPA 2.0 with Hibernate and annotation-based Spring 3.0 MVC; the database schema is generated through JPA (DDL set to true), with MySQL as the database. I will provide some reference classes and then my problem.

        public abstract class Common implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            @ManyToOne
            @JoinColumn
            private Address address;

            // with all getters and setters
            // as well as equals and hashCode
        }

        public class Parent extends Common {
            private String name;

            @OneToMany(cascade = {CascadeType.MERGE, CascadeType.PERSIST}, mappedBy = "parent")
            private List<Child> child;

            // setters and rest of class
        }

        public class Child extends Common {
            // some properties with getters/setters
        }

        public class Address implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            private String street;

            // rest of class with getters/setters
        }

    As you can see in the code, the Parent and Child classes both extend Common, so both have an address property and an id. The problem occurs when I change the address reference in a Parent: the same change is reflected in all Child objects in the list. And if I change the address reference in a Child, then on merge it changes the address reference of the Parent as well. I am not able to figure out whether this is a problem with JPA, Hibernate, or Spring. Thanks in advance.
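
    The behavior described is what JPA does when Parent and Child rows point at the same Address row through the inherited @ManyToOne: within a persistence context there is a single managed Address instance, so a change made through any owner is visible through all of them. If the entities are meant to have independent addresses, one option is to stop sharing the reference and give each owner its own Address row, for example via a copy constructor. The sketch below builds on the question's classes but is only an illustration: the copy constructor and the cascade assumption are additions, not part of the original code.

        import java.io.Serializable;
        import javax.persistence.*;

        @Entity
        public class Address implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            private String street;

            protected Address() { }                                        // no-arg constructor required by JPA
            public Address(String street) { this.street = street; }
            public Address(Address other) { this.street = other.street; }  // copy constructor, added for illustration

            public String getStreet() { return street; }
            public void setStreet(String street) { this.street = street; }
        }

        // Usage sketch (assumes cascading is configured on the Common.address association):
        //   child.setAddress(new Address(parent.getAddress()));  // give the child its own Address row
        //   child.getAddress().setStreet("new street");          // no longer visible through the parent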

    Read the article

  • Learning C++ from Java, trying to make a linked list

    - by kyeana
    I just started learning C++ (coming from Java) and am having some serious problems doing anything :P Currently I am attempting to make a linked list, but I must be doing something stupid because I keep getting "void value not ignored as it ought to be" compile errors (I have marked where they are thrown below). If anyone could help me with what I'm doing wrong, I would be very grateful :) Also, I am not used to having the choice of passing by reference, address, or value, or to memory management in general (currently I have all my nodes and the data declared on the heap). If anyone has any general advice for me, I wouldn't complain about that either :P

    Key code from LinkedListNode.cpp:

        LinkedListNode::LinkedListNode()
        {
            // set next and prev to null
            pData = 0;  // data needs to be a pointer so we can set it to null
                        // for the tail and head.
            pNext = 0;
            pPrev = 0;
        }

        /*
         * Sets the 'next' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetNext(LinkedListNode& _next)
        {
            pNext = &_next;
        }

        /*
         * Sets the 'prev' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetPrev(LinkedListNode& _prev)
        {
            pPrev = &_prev;
        }

        // rest of class

    Key code from LinkedList.cpp:

        #include "LinkedList.h"

        LinkedList::LinkedList()
        {
            // Set head and tail of linked list.
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            /*
             * THIS IS WHERE THE ERRORS ARE.
             */
            *pHead->SetNext(*pTail);
            *pTail->SetPrev(*pHead);
        }

        // rest of class

    Read the article

  • How to make a piece of WPF content take up the entire application window

    - by Bojin Li
    I'm working on an application that contains a number of content areas. I want to implement a behavior such that, in response to user input, any of these content areas can be toggled to fill the entire application window, and optionally back to its original position again. I experimented with several approaches and none of them seem optimal. Here's what I tried:

      * Use the ClipToBounds property on the content I want to make "full screen": doesn't work, because only the Canvas panel seems to fully respect this property. The application needs to be localized, so I would really like to avoid the Canvas panel.
      * Use a Grid and collapse the other content areas, so that only the one I want to see is visible and hence takes up the entire screen: this will probably work, but doesn't seem easy to implement or maintain. The "full screen" content area could be several levels deep, for example residing inside a TabControl, so I would have to hide the tab headers too, etc.
      * Reconstruct the content area in a separate view and display it while hiding the rest: seems easy enough to do with DataTemplates and my ViewModel objects, but any GUI/View-only state is not preserved with this approach.
      * Somehow "lift" the GUI/View I want to make full screen into the separate view and display it while hiding the rest: I don't know how to do this or even if it is possible.

    Anyway, if anyone knows a better approach I would love to know about it. Thanks a lot!

    Read the article

  • Rails form with better URL

    - by Sam
    Wow, switching to REST is a different paradigm for sure, and is mainly a headache right now.

    View:

        <% form_tag (businesses_path, :method => "get") do %>
          <%= select_tag :business_category_id,
                options_for_select(@business_categories.collect {|bc| [bc.name, bc.id] }.insert(0, ["All Containers", 0]),
                                   which_business_category(@business_category)),
                { :onchange => "this.form.submit();" } %>
        <% end %>

    Controller:

        def index
          @business_categories = BusinessCategory.find(:all)
          if params[:business_category_id].to_i != 0
            @business_category = BusinessCategory.find(params[:business_category_id])
            @businesses = @business_category.businesses
          else
            @businesses = Business.all
          end

          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @businesses }
          end
        end

    Routes: map.resources

    What I want is a better URL than what this form produces, which is the following: http://localhost:3000/businesses?business_category_id=1. Without REST I would have done something like http://localhost:3000/business/view/bbq (with bbq as a permalink), or http://localhost:300/business_categories/view/bbq to get the businesses associated with the category, but I don't really know the best way of doing this. So the two questions are: first, what is the best logic for finding businesses by category in the latter form, and second, how do I get that into a pretty URL, all through RESTful routes in Rails?

    Read the article

  • What are some ways to accomplish a dynamic array?

    - by Ted
    I'm going to start working on a new game, and one of the things I'd like to accomplish is a dynamic-array sort of system that would hold map data. The game will be top-down 2D and made with XNA 4.0 and C#. You begin in a randomized area which is essentially tile-based, so a two-dimensional array would be one way to accomplish this: it would hold numerical values corresponding to a list of textures, and that is how the randomly created map would be drawn.

    The problem is that I would like to create only the area around where you start, and players could then venture in whichever direction they wanted. That means I'd have to populate the map array with more randomized data in the direction they go. I could make a really large array, use the center of it, and leave the rest in anticipation of new content to be generated, but that just seems inefficient. I suppose when a new game starts I could have a one-time map-creation process that generates a large randomly generated map array, but holding all of it in memory at all times also seems inefficient.

    Perhaps there is a way to hold only parts of that map data in memory at one time. In the end I only need the chunk of the map somewhat close to the player in memory, so perhaps some of you have suggestions on good ways to approach this kind of randomized-map, dynamic-array problem. It wouldn't even need to be a dynamic array if I made it pull in nearby map data as needed and then, once off screen and no longer needed, release that memory; that way I wouldn't have a huge array taking up a bunch of memory.
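
    A common way to keep only the nearby part of the map in memory is to split the world into fixed-size chunks keyed by chunk coordinates, generate each chunk lazily from a seed the first time it is touched, and evict chunks outside a radius around the player. The game in the question is XNA/C#, but the structure is language-neutral; here is a small Java sketch under those assumptions, with the chunk size, eviction radius, and tile generator invented purely for illustration.

        import java.util.*;

        public class ChunkedMap {
            static final int CHUNK = 32;         // tiles per chunk side (illustrative)
            static final int KEEP_RADIUS = 2;    // chunks kept around the player (illustrative)

            private final long worldSeed;
            private final Map<Long, int[][]> chunks = new HashMap<>();

            public ChunkedMap(long worldSeed) { this.worldSeed = worldSeed; }

            /** Returns the tile id at world coordinates, generating its chunk on first access. */
            public int tileAt(int x, int y) {
                int cx = Math.floorDiv(x, CHUNK), cy = Math.floorDiv(y, CHUNK);
                int[][] chunk = chunks.computeIfAbsent(key(cx, cy), k -> generateChunk(cx, cy));
                return chunk[Math.floorMod(y, CHUNK)][Math.floorMod(x, CHUNK)];
            }

            /** Drops chunks far from the player so memory stays bounded. */
            public void evictFarChunks(int playerX, int playerY) {
                int pcx = Math.floorDiv(playerX, CHUNK), pcy = Math.floorDiv(playerY, CHUNK);
                chunks.keySet().removeIf(k -> {
                    int cx = (int) (k >> 32), cy = (int) (long) k;
                    return Math.abs(cx - pcx) > KEEP_RADIUS || Math.abs(cy - pcy) > KEEP_RADIUS;
                });
            }

            private int[][] generateChunk(int cx, int cy) {
                // Seed per chunk so the same chunk regenerates identically if revisited later.
                Random rng = new Random(worldSeed ^ key(cx, cy));
                int[][] tiles = new int[CHUNK][CHUNK];
                for (int[] row : tiles)
                    for (int i = 0; i < CHUNK; i++) row[i] = rng.nextInt(4);  // 4 texture ids, illustrative
                return tiles;
            }

            private static long key(int cx, int cy) {
                return ((long) cx << 32) ^ (cy & 0xffffffffL);  // pack both coordinates into one map key
            }
        }

    Because each chunk is seeded from its own coordinates, an evicted chunk regenerates identically if the player wanders back, so purely random terrain never has to be persisted.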

    Read the article

  • Windows scheduled task fails to complete with error code 0xc000013a

    - by Brian
    I'm using Windows Server 2003 and have a scheduled task that fails to complete. The task is set to run a Windows Command Script (.cmd) at 3pm each day. The script runs a program that extracts some data from a SQL Server database and uploads that data to an FTP server. The error code displayed in the "Last result" column of the scheduled tasks folder is 0xc000013a. A quick Google search leads to this Microsoft support page that states: The most common "C" error code is "0xC000013A: The application terminated as a result of a CTRL+C". No-one is logged in at the time the task runs, so there's no-one around to press CTRL+C. I'm not sure I understand what is being said here in the Microsoft documentation. I've checked the rudimentary things - the scheduled task is enabled, scheduled to run each day, and pointing to a file that does exist in a valid location. Interestingly, when I run this task manually (either by running the .cmd script from the command line, or by right-clicking the task and clicking "Run") the task completes successfully. What does this error code mean, and how can I get this task to run when I'm not there to force it?

    Read the article

  • Problem with Apache + SSL: length mismatch error and occasional bad request

    - by Ruben Garat
    We recently migrated a server from Slicehost to Linode and copied the config from one server to the other. Everything works perfectly except that we get occasional "Bad Request" errors. The error is not common: you can use the site all day and not see it, and the next day it will happen a lot. Apart from that, much of the time, even though the request works fine, we still get some errors. Using ssldump we get:

        New TCP connection #1: myip(39831) <-> develserk(443)
        1 1  0.2316 (0.2316)  C>S  SSLv2 compatible client hello
          Version 3.1
          cipher suites
          Unknown value 0x39
          Unknown value 0x38
          Unknown value 0x35
          TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
          TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
          TLS_RSA_WITH_3DES_EDE_CBC_SHA
          SSL2_CK_3DES
          Unknown value 0x33
          Unknown value 0x32
          Unknown value 0x2f
          SSL2_CK_RC2
          TLS_RSA_WITH_RC4_128_SHA
          TLS_RSA_WITH_RC4_128_MD5
          SSL2_CK_RC4
          TLS_DHE_RSA_WITH_DES_CBC_SHA
          TLS_DHE_DSS_WITH_DES_CBC_SHA
          TLS_RSA_WITH_DES_CBC_SHA
          SSL2_CK_DES
          TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
          TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
          TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
          TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
          SSL2_CK_RC2_EXPORT40
          TLS_RSA_EXPORT_WITH_RC4_40_MD5
          SSL2_CK_RC4_EXPORT40
        1 2  0.2429 (0.0112)  S>C  Handshake ServerHello
          Version 3.1
          session_id[32]=
            9a 1e ae c4 5f df 99 47 97 40 42 71 97 eb b9 14
            96 2d 11 ac c0 00 15 67 4e f3 7d 65 4e c4 30 e9
          cipherSuite         Unknown value 0x39
          compressionMethod   NULL
        1 3  0.2429 (0.0000)  S>C  Handshake Certificate
        1 4  0.2429 (0.0000)  S>C  Handshake ServerKeyExchange
        1 5  0.2429 (0.0000)  S>C  Handshake ServerHelloDone
        1 6  0.4965 (0.2536)  C>S  Handshake ClientKeyExchange
        1 7  0.4965 (0.0000)  C>S  ChangeCipherSpec
        1 8  0.4965 (0.0000)  C>S  Handshake
        1 9  0.5040 (0.0075)  S>C  ChangeCipherSpec
        1 10 0.5040 (0.0000)  S>C  Handshake
        ERROR: Length mismatch

    From the Apache error.log:

        [Fri Aug 27 14:50:05 2010] [debug] ssl_engine_io.c(1892): OpenSSL: I/O error, 5 bytes expected to read on BIO#b80c1e70 [mem: b8100918]

    The server is Ubuntu 10.04.1, the Apache version is 2.2.14-5ubuntu8, and the OpenSSL version is 0.9.8k-7ubuntu8.

    Read the article

  • Is there a good Lotus Notes open-source alternative?

    - by Ben S
    At my work we use Lotus Notes 6.5 for our email, meeting scheduling and instant messaging. I can't stand the horrible UI, buggy meeting scheduling and overall '90s feel when using it, and would love to replace it with open-source alternatives. So far I've been able to set up Thunderbird for email, and I should also be able to configure Pidgin to do IM, but I can't find any replacement for the meeting scheduling. I need to be able to receive meeting requests and respond to them. I've looked at getting the Thunderbird plugin Lightning to manage the scheduling, but everything I've read so far requires me to export .ics files from Lotus Notes or otherwise keep Lotus Notes around for day-to-day activities. I've also looked into using Evolution as the client, but I found even less information for it than I did for Thunderbird. How can I easily send, receive and respond to Lotus Notes meetings using an open-source alternative? Alternatively, if there exists a full drop-in replacement for Lotus Notes I would also consider it. Note: My desktop at work is a Windows XP machine, though I wouldn't be opposed to a solution requiring Cygwin at this point. Edit: I have no power over the server. I only want a compatible client.

    Read the article

  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel

    Read the article

  • Finding cause of TCP retransmission within a LAN

    - by Surreal
    Hello, denizens of Server Fault. I have an irritating problem with a LAN of about 100 computers, 2 Windows domain servers, and 12 VoIP phones. Since their installation around a year ago, every week or so we notice a VoIP phone resetting itself, occasionally in the middle of a call. Simultaneously there are often signs of temporary loss of connection on computers: freezes in Explorer while accessing network shares, and errors in our administration software due to loss of connection to the database server.

    I have been doing some Wireshark monitoring on the connection between the VoIP PBX and the rest of the network. Wireshark picks up a clump of retransmitted TCP packets at the times when we record phone restarts. The Wireshark log shows about 2 clusters of retransmissions a day, ranging from 5 packets to hundreds. Those in each cluster are mainly between the PBX and some set of the VoIP phones, but not always the same set. Often the retransmissions at a given time are to phones connected to the same switch, but sometimes they occur together to phones at opposite ends of the network. There are usually some coincident retransmissions in passing TCP traffic, for example between client machines and the file servers.

    The spikes in retransmissions and phone resets do not correlate well with heavy network load. They seem to occur slightly more during the day, but most in the evening, when traffic should be decreasing, and they occur reasonably often late at night when most computers are turned off and traffic should be lowest. Do you have any ideas that might help diagnose the cause of problems like this? One thing I have not yet tried, but should have, is updating the firmware of all the switches.

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago, when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone Gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only Gmail, stayed logged in really well, and "felt" like a Gmail mail client, with the Gmail interface. It had its own little red envelope icon, and it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to Gmail" documentation that Google provided. I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally two hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day, every day for two years, so I know it exists :), but I cannot find it. Any assistance would be gratefully received.

    Read the article

  • Windows XP mapped drives disconnect from Windows 7 on peer-to-peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: two XP machines, three Windows 7 machines, and a Windows 7 machine acting as the file server. I have NO issues with the Windows 7 machines staying connected to the "server" via the mapped drives. However, once a day the two XP machines will not connect to the Windows 7 "server" via the mapped drives, and I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the persistent:yes option of NET USE on the XP machines and also set a static IP address on the "server". I turned the firewall off on the "server"; it made no difference, so I turned the firewall back on. No interruption occurs with the internet when this happens; we just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines, and I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around mid-day, and I've not been able to find any activity happening at that time (like someone plugging in a thumb drive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs for his engineering firm.

    Read the article

  • Synchronization between folders on Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and I use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides the Time Machine backup tool. I have several personal files I need to access both at home and at work, and my wife, who works at home, sometimes uses the same .XLS and .DOC files I might have used during my day at work, away from home. My question is: is there a piece of software or a tool that I can use to sync my iMac and my MacBook Pro folders? Bear in mind that:

      * My wife and I might have changed the same files during the day, so the files would have to be merged so that none of the information added by either of us is lost.
      * The software/tool installed on the MacBook Pro would need to mount the Time Capsule volume so it could locate the main folder on it.
      * It has to run automatically when my MacBook is at home (with a schedule option).

    I have tested some tools like SyncTwoFolders and ChronoSync, but none fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have many schedule options. I really liked ChronoSync, but it doesn't merge the files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook), it asks you to choose which version you want to keep instead of simply letting you merge them. I don't have much experience with Automator or scripts, but maybe you can give me a hand with that.

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading

    Recently we upgraded our Windows 2008 R2 database server from an X5470 to a X5560. In theory both CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so, and CPU usage has been pretty high. Page life expectancy is massive and we are getting almost 100% cache hits for pages, so memory is not a problem.

    When I ran:

        SELECT * FROM sys.dm_os_wait_stats order by signal_wait_time_ms desc

    I got:

        wait_type                     waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
        ----------------------------  -------------------  ------------  ----------------  -------------------
        XE_TIMER_EVENT                115166               2799125790    30165             2799125065
        REQUEST_FOR_DEADLOCK_SEARCH   559393               2799053973    5180              2799053973
        SOS_SCHEDULER_YIELD           152289883            189948844     960               189756877
        CXPACKET                      234638389            2383701040    141334            118796827
        SLEEP_TASK                    170743505            1525669557    1406              76485386
        LATCH_EX                      97301008             810738519     1107              55093884
        LOGMGR_QUEUE                  16525384             2798527632    20751319          4083713
        WRITELOG                      16850119             18328365      1193              2367880
        PAGELATCH_EX                  13254618             8524515       11263             1670113
        ASYNC_NETWORK_IO              23954146             6981220       7110              1475699

        (10 row(s) affected)

    I also ran:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS
        (
            SELECT wait_type,
                   wait_time_ms / 1000. AS [wait_time_s],
                   100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                   ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
            FROM sys.dm_os_wait_stats
            WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
                'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
                'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
                'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN')
        )
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    And got:

        wait_type             wait_time_s  pct    running_pct
        CXPACKET              554821.66    65.82  65.82
        LATCH_EX              184123.16    21.84  87.66
        SOS_SCHEDULER_YIELD   37541.17     4.45   92.11
        PAGEIOLATCH_SH        19018.53     2.26   94.37
        FT_IFTSHC_MUTEX       14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries that involve parallelism (high CXPACKET). Additionally, anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code).

    The server has not been under load for more than a day or so. We are experiencing a large amount of variance in query execution times; typically many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling hyperthreading help reduce our CPU usage and increase throughput?

    Read the article

  • How can I restore my original IME Romaji input settings?

    - by JOhn K
    The suggestions in the question "Japanese IME on Windows: switch back to romaji input method (3)" did not help; the problem seems to be the same. On my Vista Home Premium PC, I had been using the Microsoft IME for English and Japanese input with romaji henkan for a long time. One day, all of a sudden, when I started up the PC the Caps Lock indicator was ON, so I pressed the Shift key and the Caps Lock indicator went off (this I now have to do every morning). When I want to type romaji input to change to Japanese, I switch "EN English (United States)" to "JP Japanese (Japan)" and set the input to hiragana. That worked until that day. But now, when I set the input to romaji-to-hiragana as I used to and start typing, it shows Japanese hiragana directly on the display, as if the keyboard were set to the Japanese 109-key layout shown in the Wikipedia article on the JIS keyboard. So I cannot enter hiragana the way I want by typing romaji (conversion to kanji with the space key still works), and that keyboard arrangement is one I never learned. Another thing I found is that when I hit the "`" key, it switches between hiragana and the alphabet. When I look at the Control Panel settings, they are the same settings I have always had. Please suggest a solution to get back the original IME input mode I used to use. John K.

    Read the article

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day-to-day work. The computer boots up, but when going into graphical mode on Linux the monitor gives me a "Mode not supported" error and doesn't display anything. I booted up Windows and, using PowerStrip, grabbed the exact ModeLine that should give the equivalent setting in Linux and added it to my xorg config file, but it doesn't seem to help. The ModeLine is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    This is the modeline for the working display settings in Windows, but it doesn't seem to work in Linux. My complete entry in the xorg.conf file for the monitor is:

        Section "Monitor"
            Identifier   "Monitor0"
            ModelName    "SyncMaster"
            DisplaySize  518 324
            HorizSync    30.0 - 81.0
            VertRefresh  56.0 - 75.0
            Option       "dpms"
            ModeLine     "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone has gotten this to work I'd love to know. Thanks.

    Read the article

  • Samba4 DC: "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "the network location cannot be reached."

    I have access to the domain controller (DC). I'm already using the DC as the domain name server (DNS); the name resolves correctly and it correctly forwards outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for "samba4 DC" plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back to check this post for a day or three, but I might, because this mystery is killing me.

    I followed the "Samba4 as a DC" guide here and supplemented gaps with this. I have tested Kerberos and NTP, set my DC as the clock to sync to on my Windows client, and it appears to be only a very small fraction of a second off, so that shouldn't be it. Also, the firewall and SELinux are both off for testing. I have also tried disabling IPv6 and cleared the registry of IPv6 records (allegedly the default Samba4 as a DC runs as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6; fair warning, I heard this on the internet, so it is probably a lie). I have tried a few other things that I have forgotten, because I have been doing this for a day and a half now.

    Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free: I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).

    Read the article

  • How do I prevent or override a group policy on Windows 7?

    - by Kevin
    A few months ago my company was purchased by a large corporation. We recently switched our network over to the large corporate network, which has more restrictive requirements. One of these is the requirement to use a proxy server for Internet traffic. However, some of our internal servers are not recognized by the corporate DNS, so we need to provide the fully qualified domain name. For Windows 7, we change the Internet Properties for IE8 and Chrome to include our domain name as an exception to the proxy server (e.g., *.foobar.com). The problem is that a group policy that does not include our domain name is continually pushed out to my systems throughout the day, which requires me to make the appropriate changes to the Internet Properties several times a day in order to access our internal servers. Is there a way I can prevent the group policy from being pushed to my systems, or detect when the group policy is pushed and override it? I am an administrator on all of my systems. I do have Firefox installed, which is not subject to the same group policy push, but I need to have IE8 and Chrome working.

    Read the article

  • SpamAssassin bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line they get a low score. One particular email has this in the header:

        X-Spam-Flag: YES
        X-Spam-Score: 8.521
        X-Spam-Level: ********
        X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
            tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001, NO_RECEIVED=-0.001,
            NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5, RAZOR2_CF_RANGE_E8_51_100=1.886,
            RAZOR2_CHECK=0.922, URIBL_RHS_DOB=1.514] autolearn=no

    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:

        Content analysis details:   (3.8 points, 5.0 required)

         pts rule name                 description
        ---- ------------------------- --------------------------------------------------
         1.5 URIBL_RHS_DOB             Contains an URI of a new domain (Day Old Bread)
                                       [URIs: cliobeads.com]
        -1.0 ALL_TRUSTED               Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE              BODY: HTML included in message
        -0.0 BAYES_20                  BODY: Bayes spam probability is 5 to 20%
                                       [score: 0.1855]
         1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                       [cf: 100]
         0.5 RAZOR2_CF_RANGE_51_100    Razor2 gives confidence level above 50%
                                       [cf: 100]
         0.9 RAZOR2_CHECK              Listed in Razor2 (http://razor.sf.net/)

    I'm really confused as to why an email gets a completely different score when it comes in than when I run spamassassin -t on it. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives," and every day a cron job fires off that runs this on every message in every user's folder:

        sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null

    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!

    Read the article

  • How to populate RRD database with CPU and MEM usage data?

    - by Tomaszs
    I have a Lighttpd server (on CentOS) and would like to display four graphs: Lighttpd traffic, Lighttpd requests per second, CPU usage and memory usage. I've set the place for the RRD database in the Lighttpd config like this:

        rrdtool.binary  = "/usr/bin/rrdtool"
        rrdtool.db-name = "/var/www/lighttpd.rrd"

    And I put into my WWW cgi-bin an sh file that reads data from the Lighttpd RRD file and creates graphs of traffic and requests per second, like this:

        #!/bin/sh
        RRDTOOL=/usr/bin/rrdtool
        OUTDIR=//var/www/graphs
        INFILE=/var/www/lighttpd.rrd
        OUTPRE=lighttpd-traffic
        WIDTH=400
        HEIGHT=100
        DISP="-v bytes --title TrafficWebserver \
        DEF:binraw=$INFILE:InOctets:AVERAGE \
        DEF:binmaxraw=$INFILE:InOctets:MAX \
        DEF:binminraw=$INFILE:InOctets:MIN \
        DEF:bout=$INFILE:OutOctets:AVERAGE \
        DEF:boutmax=$INFILE:OutOctets:MAX \
        DEF:boutmin=$INFILE:OutOctets:MIN \
        CDEF:bin=binraw,-1,* \
        CDEF:binmax=binmaxraw,-1,* \
        CDEF:binmin=binminraw,-1,* \
        CDEF:binminmax=binmaxraw,binminraw,- \
        CDEF:boutminmax=boutmax,boutmin,- \
        AREA:binmin#ffffff: \
        STACK:binmax#f00000: \
        LINE1:binmin#a0a0a0: \
        LINE1:binmax#a0a0a0: \
        LINE2:bin#efb71d:incoming \
        GPRINT:bin:MIN:%.2lf \
        GPRINT:bin:AVERAGE:%.2lf \
        GPRINT:bin:MAX:%.2lf \
        AREA:boutmin#ffffff: \
        STACK:boutminmax#00f000: \
        LINE1:boutmin#a0a0a0: \
        LINE1:boutmax#a0a0a0: \
        LINE2:bout#a0a735:outgoing \
        GPRINT:bout:MIN:%.2lf \
        GPRINT:bout:AVERAGE:%.2lf \
        GPRINT:bout:MAX:%.2lf \
        "
        $RRDTOOL graph $OUTDIR/$OUTPRE-hour.png -a PNG --start -14400 $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-day.png -a PNG --start -86400 $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-month.png -a PNG --start -2592000 $DISP -w $WIDTH -h $HEIGHT

        OUTPRE=lighttpd-requests
        DISP="-v req --title RequestsperSecond -u 1 \
        DEF:req=$INFILE:Requests:AVERAGE \
        DEF:reqmax=$INFILE:Requests:MAX \
        DEF:reqmin=$INFILE:Requests:MIN \
        CDEF:reqminmax=reqmax,reqmin,- \
        AREA:reqmin#ffffff: \
        STACK:reqminmax#00f000: \
        LINE1:reqmin#a0a0a0: \
        LINE1:reqmax#a0a0a0: \
        LINE2:req#00a735:requests"
        $RRDTOOL graph $OUTDIR/$OUTPRE-hour.png -a PNG --start -14400 $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-day.png -a PNG --start -86400 $DISP -w $WIDTH -h $HEIGHT
        $RRDTOOL graph $OUTDIR/$OUTPRE-month.png -a PNG --start -2592000 $DISP -w $WIDTH -h $HEIGHT

    Basically it's not my script; I got it from somewhere on the internet. Now I would like to do the same for CPU usage and memory usage. I don't want to use any additional packages! As you can see, Lighttpd populates the lighttpd.rrd file with traffic data and requests per second. Now I would like the system to populate a second RRD file with CPU and memory usage, so I can add code to the sh file to generate graphs for that data as well. How can I populate an RRD file with CPU and memory usage data? Please, NO THIRD-PARTY tools!

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >