Search Results

Search found 9461 results on 379 pages for 'digital signal processing'.


  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, a web service, or any other kind of data import/export process, and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about TDS (Tabular Data Stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.
    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the ‘GO’ command, then there will be three different roundtrips.
    TDS packets sent from the client – TDS (Tabular Data Stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    TDS packets received from server – the number of TDS packets sent by the server to the client during the query execution.
    Bytes sent from client – the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called by procedure name plus parameters, and this will minimize the network pressure.
    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    Client processing time – the amount of time, in milliseconds, between the first received response packet and the last received response packet by the client.
    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. A long ‘client processing time’, on the other hand, does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronized way of utilizing resources between the client – server – client. Here is another example: think about a similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here. And finally, here are some guidelines for monitoring the network performance and improving it:
    Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’.
    Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    Make sure to establish a good friendship with your network administrator (buy them coffee, for example :) ) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on.
    Find some time to read a bit about networking.
    In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
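    To make the wait-type discussion concrete, here is a minimal T-SQL sketch (an editor's illustration, not part of Feodor's original post) that reads the ASYNC_NETWORK_IO counters he describes from the standard sys.dm_os_wait_stats DMV:

        -- Cumulative waits since the last service restart (or manual clear).
        -- A high wait_time_ms here suggests clients are slow to consume results.
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_ms
        FROM   sys.dm_os_wait_stats
        WHERE  wait_type = 'ASYNC_NETWORK_IO';

    Remember the caveat from the article: these counters only start from the point the packets are assembled inside SQL Server, so the network hops on either side are invisible to them.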
    As further reading, I would still highly recommend the Wait Stats series on this blog, and I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Can someone help me install MySQL server please? This is bugging me...

    - by Alex
    $ sudo aptitude install mysql-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Reading extended state information
    Initializing package states... Done
    The following NEW packages will be installed:
      libhtml-template-perl{a} mysql-server mysql-server-5.0{a} mysql-server-core-5.0{a}
    0 packages upgraded, 4 newly installed, 0 to remove and 12 not upgraded.
    Need to get 0B/27.7MB of archives. After unpacking 91.1MB will be used.
    Do you want to continue? [Y/n/?] y
    Writing extended state information... Done
    Preconfiguring packages ...
    Selecting previously deselected package mysql-server-core-5.0.
    (Reading database ... 17022 files and directories currently installed.)
    Unpacking mysql-server-core-5.0 (from .../mysql-server-core-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ...
    Selecting previously deselected package mysql-server-5.0.
    Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ...
    Selecting previously deselected package libhtml-template-perl.
    Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ...
    Selecting previously deselected package mysql-server.
    Unpacking mysql-server (from .../mysql-server_5.1.30really5.0.75-0ubuntu10.3_all.deb) ...
    Setting up mysql-server-core-5.0 (5.1.30really5.0.75-0ubuntu10.3) ...
    Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ...
     * Stopping MySQL database server mysqld  [ OK ]
    /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
    dpkg: error processing mysql-server-5.0 (--configure):
     subprocess post-installation script returned error exit status 1
    Setting up libhtml-template-perl (2.9-1) ...
    dpkg: dependency problems prevent configuration of mysql-server:
     mysql-server depends on mysql-server-5.0; however:
      Package mysql-server-5.0 is not configured yet.
    dpkg: error processing mysql-server (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     mysql-server-5.0
     mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    A package failed to install. Trying to recover:
    Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ...
     * Stopping MySQL database server mysqld  [ OK ]
    /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
    dpkg: error processing mysql-server-5.0 (--configure):
     subprocess post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of mysql-server:
     mysql-server depends on mysql-server-5.0; however:
      Package mysql-server-5.0 is not configured yet.
    dpkg: error processing mysql-server (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     mysql-server-5.0
     mysql-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Reading extended state information
    Initializing package states... Done
    Writing extended state information... Done
    Before I installed it, I ran this: sudo aptitude purge mysql-server mysql-server-5.0
    This has happened before. I remember that last time I did something with dpkg which fixed it.
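    A hedged recovery sketch (an editor's note, not a verified fix): the postinst script is dying because it cannot write /etc/mysql/conf.d/old_passwords.cnf into a missing directory, so recreating the directory dpkg expects and re-running the unfinished configuration step may let the install complete:

        # recreate the config directory the postinst script writes into
        sudo mkdir -p /etc/mysql/conf.d
        # retry every package left half-configured
        sudo dpkg --configure -a
        # if still wedged, purge the broken state before reinstalling
        # (warning: this deletes MySQL configuration)
        # sudo aptitude purge mysql-server mysql-server-5.0 mysql-server-core-5.0

    This matches the asker's memory of having fixed a similar state "with dpkg" before.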

    Read the article

  • Realtek HD Audio 5.1 Optical Input (from Xbox 360)

    - by Shevek
    I'm trying to connect up my Xbox 360 digital audio to my 5.1 speakers via my PC (my LCD has dual input, DVI for the PC, D-SUB for the Xbox). The motherboard has a Realtek ALC888 chipset and I have a 5.1 speaker system connected via 3 x 3.5mm jacks (FR/FL, RR/RL, C/LFE) and I get full 5.1 output from the PC. I have connected the optical audio cable from the Xbox to the Optical In on the motherboard's backplate. With the Xbox in Digital Stereo mode I get 2 channel audio from the Xbox, through the PC, to the speakers. With the Xbox in Dolby Digital 5.1 mode I get no sound at all. I have the latest Realtek drivers installed in Win 7 32-bit. Questions: Is it possible to use the full 5.1 DD from the Xbox? If so, am I missing some option(s) in the Realtek setup? Do I need some other piece of software to do this? (AC3Filter or FFDShow perhaps) Many thanks

    Read the article

  • Audigy 2 Coaxial to Coaxial/Optical connection possible?

    - by Chris
    Hello, the original question was deleted and is asked again below with accurate information. Edit: Excuse me for my ignorance; my friend has a Logitech Z-5500 set. I thought, after comparing those systems on Google Images, that he had the Z-680, but he doesn't. This set has a single digital coaxial input for DVD or CD players or PC sound cards (requires a coaxial cable, sold separately). This single cable was connected to the orange tulip connector (S/PDIF coaxial out) on the back of his HP Elite m9070's onboard sound; this connector is broken. How can I use the digital out on the Audigy 2 with a single coaxial cable (see image below)? I have the following converters at my disposal; can I use one of these?
    - 3.5 mm male - coax optical
    - mini optical male - toslink optical female
    - 2 x toslink optical female, toslink coupler, optical audio extension
    Note: Is it possible to connect a toslink cable with a mini optical male - toslink converter to the digital out of the Audigy 2? (see image below)

    Read the article

  • Calling the LWRP from the Exception Handler

    - by Sarah Haskins
    Is it possible to call out to a Provider (LWRP) from a Chef exception handler? I think my provider is out of scope, but I don't know if what I am trying to do is possible or advisable. Here is my provider code (cookbooks/config/provider/signal.rb):

    action :failure do
      Chef::Log.info("Yeah success")
    end

    Here is my exception handler code (exception_handler/handlers/exceptionHandler.rb):

    require 'chef/handler'

    config_signal "signal" do
      action :nothing
    end

    class Chef
      class Handler
        class LogCollector < Chef::Handler
          notifies :failure, resources(:config_signal => signal)
        end
      end
    end

    Also, if anyone has a good recommendation for general reading about scope in the context of Chef, I'd appreciate it.
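    For comparison, here is a minimal exception-handler skeleton (an editor's sketch using the documented Chef::Handler API, not a confirmed answer to the scoping question; the LogCollector name is taken from the question):

    require 'chef/handler'
    require 'chef/log'

    class LogCollector < Chef::Handler
      # Chef calls report on registered handlers and fills in run_status.
      def report
        if run_status.failed?
          Chef::Log.error("Run failed: #{run_status.formatted_exception}")
          # Resources declared in recipes (such as config_signal above) are
          # not in scope here; cleanup has to be plain Ruby or a resource
          # constructed by hand inside the handler.
        end
      end
    end

    Note that notifies and resources(...) are recipe-DSL constructs, which is why they do not resolve inside a bare class body.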

    Read the article

  • How to split audio into multiple channels from optical S/PDIF or 1/8"?

    - by Josh M.
    I have a motherboard which has an optical S/PDIF output or 1/8". I'd like to "split" that signal into the appropriate channels so that I can then connect that to the wires behind my car's headunit which, in turn, run to the amp. The factory Bose amp just takes a single connector with a million wires running out of it, so that's why I would need to separate the signal into separate channels. On the other end there are four RCA connectors: front left, front right, rear left, rear right. The sub-woofer signal does not require an additional connection. Edit: Revised to include S/PDIF or 1/8".

    Read the article

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger wifi signal in Windows than in Ubuntu. As soon as I boot Ubuntu, it connects to the network with a strong signal and then loses it very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.

    Read the article

  • Extremely poor WiFi reception on new desktop

    - by guy
    I just received a new desktop with a built-in WiFi card (Windows 7 Home Premium x64). However, I am only able to receive a WiFi signal when the computer is within 1-2 meters of a router. Even then, the signal is very weak (Windows 7 says it is "poor"). I have tried this with 2 different routers and see similar results. I have somewhere around a dozen other devices that can connect to both routers with greater signal strength (usually full strength) from much, much further away. When I called support I was instructed to reinstall my drivers and to change the WiFi channel of the router. Neither worked. I was then told that the problem was with my routers, that the computer was functioning normally, and that it was certainly not a hardware problem. What would be the cause of this, and is there anything I can do to fix it?

    Read the article

  • How to let Linux Python application handle termination on user logout correctly?

    - by tuxpoldo
    I have written a Linux GUI application in Python that needs to do some cleanup tasks before being terminated when the user logs out. Unfortunately, it seems that on logout all applications are killed. I tried both handling POSIX signals and DBUS notifications, but nothing worked. Any idea what I could have done wrong? On application startup I register some termination handlers:

    # create graceful shutdown mechanisms
    signal.signal(signal.SIGTERM, self.on_signal_term)
    self.bus = dbus.SessionBus()
    self.bus.call_on_disconnection(self.on_session_disconnect)

    When the user logs out, neither self.on_signal_term nor self.on_session_disconnect is called. The problem occurs in several scenarios: Ubuntu 14.04 with Unity, Debian Wheezy with GNOME. Full code: https://github.com/tuxpoldo/btsync-deb/tree/master/btsync-gui
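    For contrast, a minimal sketch of ordinary process-level cleanup (an editor's illustration; it does not solve the session case, since GNOME/Unity session managers may SIGKILL clients at logout, and SIGKILL cannot be caught):

    import atexit
    import signal
    import sys

    def cleanup():
        # flush state, close files, etc.
        pass

    def on_signal(signum, frame):
        cleanup()
        sys.exit(0)

    # SIGTERM/SIGHUP cover polite kills; SIGKILL never reaches the process.
    signal.signal(signal.SIGTERM, on_signal)
    signal.signal(signal.SIGHUP, on_signal)
    atexit.register(cleanup)

    If handlers like these never fire, the likely explanation is exactly that hard kill, which is consistent with what the asker observes.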

    Read the article

  • PSU aka Power Supply won't turn off and No Power_Good - Safe to keep using?

    - by Tek
    The title describes the symptoms of my problem. I mainly chose this title for search engines, so people can learn why this happens, since I see a lot of uncertainty when it comes to this problem. I do have a question related to the source of the problem, though. First of all, connecting power to the power supply automatically turns on my computer. With a power supply tester, the tester turns on without me having to push the button to test the PSU. lol. The PG (Power Good) signal is missing. The strange thing is my computer still turns on (OS boots, etc.) despite the missing Power Good signal. Is it really that unsafe to use the power supply when it's missing the Power Good signal? All the voltages seem to be in check. Here's a picture: Power Supply Tester Readout. And by safe (considering the readout), I mean: is it likely my components (CPU, mobo, etc.) could be damaged?

    Read the article

  • Controlling TV Channel Through Computer

    - by killianmcc
    I'm passing my TV input through my computer so that I can use the video stream in an application I'm creating, which then outputs to my TV. E.g. Sky Digibox/FreeView box - Laptop - TV. On my laptop I'll be using the stream in a WPF application so I can overlay XAML objects onto it. My question is: what would be the best way for me to send a signal back to the box to, say, change the channel? I don't want to have to use the remote; I want the computer to handle everything. Is there a standard cable these boxes have that could take what would normally be a remote-control signal and use that as input instead? Or would I have to go down the route of looking at some sort of infrared LED to send the signal, recreating the remote? Apologies if this is not clear enough; let me know and I'll try to be more precise.
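    On the infrared route, the usual software building block is the LIRC family (WinLIRC on Windows, LIRC on Linux); a hedged sketch of the Linux CLI (it assumes an IR transmitter is attached and that a remote profile named "sky" exists in /etc/lirc/lircd.conf – both are assumptions, not givens):

        # send "channel up", then the digit keys for channel 15;
        # remote and key names must match the configured LIRC profile
        irsend SEND_ONCE sky KEY_CHANNELUP
        irsend SEND_ONCE sky KEY_1
        irsend SEND_ONCE sky KEY_5

    The application could shell out to commands like these (or talk to the lircd socket directly) whenever it needs to change the channel.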

    Read the article

  • Beginner questions on Java Regular Expression

    - by Robert
    Hello everyone. I began studying Java regular expressions recently and I found some really interesting tasks. For example, I now need to dig out "Product Name", "Product Description" and "Sellers for this product" out of the following HTML code. (I am sorry for the big chunk of code, but it is very straightforward.)

    <td class="sr-check"> <input type="checkbox" name="cptitle" value="678560038" /></td>
    <td class="sr-image" style="width: 80px;"><a href="/Nikon-D300S-12-3-678560038/prices-html" class="strictRule" rel="nofollow"><img src="http://img01.static-nextag.com/image/Nikon-D300S-12-3-MP-Digital-SLR-Camera-Body-Black/0/000/006/789/461/678946110.jpg" alt="Nikon D300S 12.3 MP Digital SLR Camera Body - Black" class="imageLink strictRule" height="75" width="75" id="opILink_0" title="Nikon Digital Cameras - Nikon D300S 12.3 MP Digital SLR Camera Body - Black" /></a><div class="breaker">&nbsp;</div></td>
    <td class="sr-info"> <div class="sr-info">
    <a id="opPNLink_0" class="underline" style="font-size:16px" href="/Nikon-D300S-12-3-678560038/prices-html" >Nikon D300S 12.3 MP <b>Digital</b> SLR <b>Camera</b> Body - Black</a>
    <div class="sr-subinfo">
    <div class="sr-info-description">SLR - 13.1MP, 12.3MP - 1x Optical Zoom - CompactFlash, SD/MMC Memory Card - 3in.</div>
    <div class="rating"> <img src="http://img01.static-nextag.com/imagefiles/stars/stars4_10px.gif" alt="4/5 stars" title="4/5 stars" /> (92 user ratings)</div>
    <div style="clear: both;"> <!-- nxtginc=nextag.api.ServerInclude$JSPIncludeWriter(/buyer/ATLSSI.jsp?ptid=678560038&dts=y) --> <a id="_atl_0" style="" href="http://www.nextag.com/serv/main/buyer/MyPDir.jsp?list=_transCookieList&amp;cmd=add&amp;ptitle=678560038" rel="nofollow">+ Add to Shopping List</a> &nbsp;|&nbsp; <!-- endnxtginc --> <a rel="nofollow" id="mltLink_0" class="mlt-link" href="/Digital-Cameras--zz500001z2z678560038zB2dgz5---html">See More Like This</a> </div>
    <div id="fsLink_0" class="featuredSeller">
    <a rel="nofollow" class="featuredSeller" id="opFSLink_0_0" href="/norob/PtitleSeller.jsp?chnl=main&amp;tag=785646073&amp;ctx=x%2BN%2Fs9zy56l4u8RXCzALE1jeLesDMzeK09rPQEdK3Yjx395ZzX9cMh9N5JAxjk7xPqF9hjk2ztM5IRXU5nspLubIXYaVzI%2B%2Fg7h1Qz58TzgvrWuNawV8qEIqqSmClArWMq6mpzNRuSlgg2xCXYObNnaIH00iKSUmBawDRvecwbCpAxhXgXoLEiEinTwr3EipComdzxL9UHFYTLoWUToUB5SRSsolQmEJ3mgnnvu83%2FC8W34TGpN9mJo%2BnyAeTkt4&amp;ptitle=678560038" target="_blank" >Thundercameras</a>:$1,289 &nbsp;
    <a rel="nofollow" class="featuredSeller" id="opFSLink_0_1" href="/norob/PtitleSeller.jsp?chnl=main&amp;tag=797076595&amp;ctx=x%2BN%2Fs9zy56l4u8RXCzALE1jeLesDMzeK09rPQEdK3Yjx395ZzX9cMh9N5JAxjk7xPqF9hjk2ztM5IRXU5nspLubIXYaVzI%2B%2Fg7h1Qz58TzgvrWuNawV8qEIqqSmClArWMq6mpzNRuSlgg2xCXYObNrcWLhL%2BhryuAGhXNhYSPE%2BpAxhXgXoLEiEinTwr3EipComdzxL9UHFYTLoWUToUB5SRSsolQmEJ3mgnnvu83%2FC8W34TGpN9mJo%2BnyAeTkt4&amp;ptitle=678560038" target="_blank" >PhotoVideoSuperStore</a>:$1,269 &nbsp;
    <a rel="nofollow" class="featuredSeller" id="opFSLink_0_2" href="/norob/PtitleSeller.jsp?chnl=main&amp;tag=803555293&amp;ctx=x%2BN%2Fs9zy56l4u8RXCzALE1jeLesDMzeK09rPQEdK3Yjx395ZzX9cMh9N5JAxjk7xPqF9hjk2ztM5IRXU5nspLubIXYaVzI%2B%2Fg7h1Qz58TzgvrWuNawV8qEIqqSmClArWMq6mpzNRuSlgg2xCXYObNt06qcvLJ5UQz7S3zKd4urWpAxhXgXoLEiEinTwr3EipComdzxL9UHFYTLoWUToUB5SRSsolQmEJ3mgnnvu83%2FC8W34TGpN9mJo%2BnyAeTkt4&amp;ptitle=678560038" target="_blank" >Digitalelect</a>:$1,279 &nbsp;</div>

    I would think of: (1) digging out the product name from the <td class="sr-image"> tag, using the regular expression exp = "<td><span\\s+class=\"sr-image\"[^>]*>" + ".*?</span><a href=\"" + "([^\"]+)" + "\"[^>]*>" + "([^<]+)" + "</a>.*?</td>"; (2) digging out the product info from the <div class="sr-info-description"> tag: exp = "<div class="sr-info-description"> [^>]*>" (3) digging out the sellers' names from the <div id="fsLink_0" class="featuredSeller"> tag: exp = "<div id="fslink_0" class="featuredSeller[^>]*>" + ".*?</span><a rel=\"" + "([^\"]+)" + "\"[^>]*>" + "([^<]+)" + "</a>.*?</td>"; I am just beginning to learn Java regular expressions; I would be grateful if you could correct me if I am on the wrong track or my regular expressions are wrong. Thanks a lot, guys.

    Read the article

  • QThread - trouble shutting down threads

    - by Bryan Greenway
    For the last few days, I've been trying out the new preferred approach for using QThreads without subclassing QThread. The trouble I'm having is when I try to shut down a set of threads that I created. I regularly get a "Destroyed while thread is still running" message (if I'm running in Debug mode, I also get a Segmentation Fault dialog). My code is very simple, and I've tried to follow the examples that I've been able to find on the internet. My basic setup is as follows: I have a simple class that I want to run in a separate thread; in fact, I want to run 5 instances of this class, each in a separate thread. I have a simple dialog with a button to start each thread, and a button to stop each thread (10 buttons). When I click one of the "start" buttons, a new instance of the test class is created, a new QThread is created, and moveToThread is called to move the test class object to the thread... also, since I have a couple of other members in the test class that need to move to the thread, I call moveToThread a few additional times with these other items. Note that one of these items is a QUdpSocket, and although this may not make sense, I wanted to make sure that sockets could be moved to a separate thread in this fashion... I haven't tested the use of the socket in the thread at this point. Starting the threads all seems to work fine. When I use the Linux top command to see if the threads are created and running, they show up as expected. The problem occurs when I begin stopping the threads. I randomly (or so it appears) get the error described above. Class that is to run in a separate thread:

    // Declaration
    class TestClass : public QObject
    {
        Q_OBJECT
    public:
        explicit TestClass(QObject *parent = 0);
        QTimer m_workTimer;
        QUdpSocket m_socket;
    Q_SIGNALS:
        void finished();
    public Q_SLOTS:
        void start();
        void stop();
        void doWork();
    };

    // Implementation
    TestClass::TestClass(QObject *parent) : QObject(parent)
    {
    }

    void TestClass::start()
    {
        connect(&m_workTimer, SIGNAL(timeout()), this, SLOT(doWork()));
        m_workTimer.start(50);
    }

    void TestClass::stop()
    {
        m_workTimer.stop();
        emit finished();
    }

    void TestClass::doWork()
    {
        int j;
        for (int i = 0; i < 10000; i++)
        {
            j = i;
        }
    }

    Inside my main app, this code is called to start the first thread (similar code exists for each of the other threads):

    mp_thread1 = new QThread();
    mp_testClass1 = new TestClass();
    mp_testClass1->moveToThread(mp_thread1);
    mp_testClass1->m_socket.moveToThread(mp_thread1);
    mp_testClass1->m_workTimer.moveToThread(mp_thread1);
    connect(mp_thread1, SIGNAL(started()), mp_testClass1, SLOT(start()));
    connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(quit()));
    connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater()));
    connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(deleteLater()));
    connect(this, SIGNAL(stop1()), mp_testClass1, SLOT(stop()));
    mp_thread1->start();

    Also inside my main app, this code is called when a stop button is clicked for a specific thread (in this case thread 1): emit stop1(); Sometimes the threads are stopped and destroyed without issue. Other times, I get the error described above. Any guidance would be greatly appreciated. Thanks, Bryan
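    One hedged observation (an editor's sketch built on the member names from the question, not a confirmed diagnosis): connecting the worker's finished() directly to the QThread's deleteLater() schedules the QThread object for deletion while the thread itself is still shutting down, which matches the "Destroyed while thread is still running" message. Tying the thread's deletion to its own finished() signal, and waiting before teardown, avoids the race:

    // delete the QThread only after the *thread* has finished,
    // not when the worker emits finished()
    connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(quit()));
    connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater()));
    connect(mp_thread1, SIGNAL(finished()), mp_thread1, SLOT(deleteLater()));

    // and if the dialog closes while threads are still up, block until exit:
    if (mp_thread1->isRunning()) {
        mp_thread1->quit();
        mp_thread1->wait();   // returns once the thread's event loop has ended
    }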

    Read the article

  • Does a site's bounce rate influence Google rankings?

    - by Joel Spolsky
    Does Google consider bounce rate or something similar in ranking sites? Background: here at Stack Exchange we noticed that the latest Google algorithm changes resulted in about a 20% dip in traffic to Server Fault (and a much smaller dip in traffic to Super User). Stack Overflow traffic was not affected. There was an article on WebProNews which hypothesized that bounce rate might be a ranking signal in Google's latest Panda update. According to Google Analytics, these are our bounce rates over the last month:

    Site            Bounce Rate   Avg Time on Site
    -------------   -----------   ----------------
    SuperUser       84.67%        01:16
    ServerFault     83.76%        00:53
    Stack Overflow  63.63%        04:12

    Now, technically, Google has no way to know the bounce rate. If you go to Google, search for something, and click on the first result, Google can't tell the difference between:
    - a user who turns off their computer
    - a user who goes to a completely different web site
    - a user who spends hours clicking around on the website they landed on
    What Google does know is how long it takes the user to come back to Google and do another search. According to the book In The Plex (page 47), Google distinguishes between what they call "short clicks" and "long clicks":
    - A short click is a search where the user quickly comes back to Google and does another search. Google interprets this as a signal that the first search results were unsatisfactory.
    - A long click is a search where the user doesn't search again for a long time.
    The book says that Google uses this information internally, to judge the quality of their own algorithms. It also said that short click data in which someone retypes a slight variation of the search is used to fuel the "Did you mean...?" spell checking algorithm. So, my hypothesis is that Google has recently decided to use long click rates as a signal of a high quality site. Does anyone have any evidence of this? Have you seen any high-bounce-rate sites which lost traffic (or vice-versa)?

    Read the article

  • The Internet of Things & Commerce: Part 2 -- Interview with Brian Celenza, Commerce Innovation Strategist

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 2 (of 3) Welcome back to the second installment of my three-part series on the Internet of Things & Commerce. A few weeks ago, I wrote “The Next 7,000 Days” about how we’ve become embedded in a digital architecture in the last 7,000 days since the birth of the internet – an architecture that every day ties the massive expanse of the internet ever more closely to our physical lives. This blog series explores how this new blend of virtual and material will change how we shop and how businesses sell. Now enjoy reading my interview with Brian Celenza, one of the chief strategists in our Oracle Commerce innovation group. He comments on the past, present, and future of how the growing Internet of Things relates, and will relate, to the buying and selling of goods on and offline. -------------------------------------------- QUESTION: You probably have one of the coolest jobs on our team, Brian – and frankly, one of the coolest jobs in our industry. As part of the innovation team for Oracle Commerce, you’re regularly working on bold features and groundbreaking commerce-focused experiences for our vision demos. As you look back over the past couple of years, what is the biggest trend (or trends) you’ve seen in digital commerce that started to bring us closer to this idea of what people are calling an “Internet of Things”? Brian: Well, as you look back over the last couple of years, the speed at which change in our industry has moved looks like one of those blurred movement photos – you know, the ones where the landscape blurs because the observer is moving so quickly your eye focus can’t keep up. But one thing that is absolutely clear is that the biggest catalyst for that speed of change – especially over the last three years – has been mobile. Mobile technology changed everything. Over the last three years the entire thought process of how to sell on (and offline) has shifted because of mobile technology advances. Particularly for eCommerce professionals who have started to move past the notion of “channels” for selling goods to this notion of “Mobile First”… then the Web site. Or more accurately, that everything – smartphones, web, store, tablet – is just one channel, or has to act like one singular access point to the same product catalog, information and content. The most innovative eCommerce professionals realized some time ago that it’s not ideal to build an eCommerce Web site and then build everything on top of or off of it. Rather, they want to build an eCommerce API and then integrate it with all other systems. To accomplish this, they are leveraging all the latest mobile technologies or possibilities mobile technology has opened up: 4G and LTE, GPS, bluetooth, touch screens, apps, html5… How has this all started to come together for shopping experiences on and offline? Well, to give you a personal example, I remember visiting an Apple store a few years ago and being amazed that I didn’t have to wait in line because a store associate knew everything about me from my ID – right there on the sales floor – and could check me out anywhere. Then just a few months later, when (like any good addict) I went back to get the latest and greatest new gadget, I felt like I was stealing it because I could check myself out with my smartphone. I didn’t even need to see a sales associate OR go to a cash register. Amazing.
    And since then, all sorts of companies across all different types of industries – from food service to apparel – are starting to see mobile payments in the billions of dollars now, thanks not only to the convenience factor but to smart loyalty rewards programs as well. These are just some really simple current examples that come to mind. So many different things have happened in the last couple of years, it’s hard to really absorb it all quickly – because as soon as you do, everything changes again! Just like that blurry speed photo image. For eCommerce, however, this type of new environment underscores the importance of building an eCommerce API – a platform that has services you can tap into and build on as the landscape changes at a fever pitch. It’s a mobile-first perspective. A web service perspective – particularly if you are thinking of how to engage customers across digital and physical spaces. —— QUESTION: Thanks for bringing us into the present – some really great examples you gave there to put things into perspective. So what do you see as the biggest trend right now around the “Internet of Things” – and what’s coming in the next few years? Brian: Honestly, even sitting where I am in the innovation group, it’s hard to look out even 12 months because, well, I don’t even think we’ve fully caught up with what is possible now. But I can definitely say that in the last 12 months and in the coming 12 months, in the technology and eCommerce world it’s all about iBeacons. iBeacons are awesome tools we have right now to tie together physical and digital shopping experiences. They know exactly where you are as a shopper and can communicate that to businesses. Currently there seem to be two camps of thought around iBeacons. First, many people are thinking of them like an “indoor GPS”, which to be fair they literally are. The use case this first camp envisions for iBeacons is primarily advertising and marketing. So they use iBeacons to push location-based promotions to customers if they are close to a store or in a store. You may have seen these types of mobile promotions start to pop up occasionally on your smartphone as you pass by a store you’ve bought from in the past. That’s the work of iBeacons. But in my humble opinion, these promotions probably come too early in the customer journey, and although they may be well timed and work to “convert” in some cases, I imagine in most they are just eroding customer trust because they are kind of a “one-size-fits-all” solution rather than one that takes into account what exactly the customer might be looking for in that particular moment. Maybe they just want more information, and a promotion is way too soon for that type of customer. The second camp is more in line with where my thinking falls. In this case, businesses take a more sensitive approach with iBeacons to customers’ needs. Instead of throwing out a “one-size-fits-all” to any passer-by with iBeacons, the use case is more around looking at the physical proximity of a customer as an opportunity to provide a service: show expert reviews on a product they may be looking at in a particular aisle of a store, offer the opportunity to compare prices (and then offer a promotion), signal an in-store associate if a customer has been in the store for more than 10 minutes in one place. These are all less intrusive, more value-driven uses of iBeacons. And they are more about building customer trust through service.
    To take this example a bit further into the future realm of “Big Data” and the “Internet of Things”, businesses could actually use the Oracle Commerce Platform and iBeacons to “silently” track customer movement within the store to provide higher quality service. And this doesn’t have to be creepy or intrusive. Simply, if a customer has been in a particular department or aisle for more than 5 or 10 minutes, an in-store associate could come over and offer some assistance, already knowing customer preferences from their online profile and maybe even seeing the items in a shopping cart they started at home. None of this has to be revealed to the customer, but it certainly could boost the level of service an in-store sales associate could provide. Or, in another futuristic example, stores could use the digital footprint of the physical store transmitted by iBeacons to generate heat maps of the store that could be tracked over time. Imagine how much you could find out about which parts of the store are busier during certain parts of the day or seasons. This could completely revolutionize how physical merchandising is deployed or where certain high-value / new items are placed. And/or this use of iBeacons could also help businesses figure out if customers are getting held up in certain parts of the store during busy days like Black Friday. If long lines are causing customers to bounce from a physical store and leave those holiday gifts behind, maybe having employees with mobile checkout as an option could remove the cash register bottleneck. But going back to my original statement, it’s all still very early in the story for iBeacons. The hardware manufacturers are still very new and there is still not one clear standard. Honestly, it all goes back to building and maintaining an extensible and flexible platform for anywhere engagement. What you’re building today should allow you to rapidly take advantage of whatever unimaginable use cases wait around the corner. ------------------------------------------------------ I hope you enjoyed the brief interview with Brian. It’s really awesome to have such smart and innovation-minded individuals on our Oracle Commerce innovation team. Please join me again in a few weeks for Part 3 of this series, where I interview one of the product managers on our team about how the blending of digital and in-store selling is influencing our product development and vision.

    Read the article

  • DNS - domain conflict?

    - by Stefanos.Ioannou
    I was given two domains: domain.com & domain.info (they are on GoDaddy). And I was also given two servers on Digital Ocean: 107.107.38.99 (a Rails app) and 107.107.90.17 (a WordPress platform). At first, I was instructed to associate domain.com with 107.107.38.99 (the Rails app). Then I was instructed to de-associate this IP from domain.com and associate 107.107.90.17 with the domain name domain.com. Then I was instructed to associate domain.info with 107.107.38.99 (the Rails app). Right now, when I go to domain.com, the WordPress platform (107.107.90.17) loads fine, and that is what is expected. But when I go to domain.info for the Rails app (107.107.38.99), I get redirected to domain.com. This is not expected, and this is really weird for me. When I ping domain.info I get this:

    PING domain.info (107.107.38.99): 56 data bytes
    64 bytes from 107.107.38.99: icmp_seq=0 ttl=50 time=74.601 ms

    which is the expected result, showing the correct IP, but I don't understand why I get redirected to domain.com... (which, when I ping, gives:)

    64 bytes from 107.107.90.17: icmp_seq=0 ttl=50 time=75.057 ms

    The PTR records on Digital Ocean are as follows:

    IP Address       PTR Record
    107.107.38.99    domain.info.
    107.107.90.17    domain.com.

    and the DNS configurations on Digital Ocean are:

    domain.com
      A:     @ 107.107.90.17
      CNAME: * @

    domain.info
      A:     @ 107.107.38.99
      CNAME: * @

    I am not sure what the issue is; if you have any clue please let me know, I will be really grateful. If you need any other info let me know.
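    Since ping already resolves domain.info to the right address, one hedged check (an editor's suggestion, not a diagnosis) is whether the redirect is happening at the HTTP layer on 107.107.38.99 rather than in DNS:

        # what does public DNS actually return for each name?
        dig +short domain.info @8.8.8.8
        dig +short domain.com @8.8.8.8

        # does the web server itself answer with a 301 to domain.com?
        curl -sI http://domain.info/ | head -n 5

    A Location: header pointing at domain.com would mean the server block / virtual host on the Rails droplet is doing the redirecting, and the GoDaddy/Digital Ocean records are fine.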

    Read the article

  • Want to book hotel stay using Bitcoin? Book using Expedia.com

    - by Gopinath
    The online travel-booking leader Expedia announced that it has started accepting Bitcoin for booking hotels on its website. For those who are new to Bitcoin, it is a digital currency in which transactions can be performed without the need for a central bank – it's more like an internet of currency. At the moment Expedia is accepting Bitcoin payments only for hotel bookings, and in the future it may allow flight and vacation-package bookings. When Expedia customers want to pay for a hotel using Bitcoin, they are transferred to Coinbase, a third-party Bitcoin processor, to make the payment, and then they are redirected back to Expedia.com to complete the booking. This simple process would definitely drive mainstream adoption of Bitcoin – a win-win situation for digital currency users as well as for the travel company, as they save a lot on card-processing fees. Online retailers pay around 3% of the transaction amount in fees to credit card companies like Visa & MasterCard when they accept cards, but Coinbase charges a fee of just 1 percent for processing Bitcoin. Irrespective of customers' adoption of Bitcoin-based payments on Expedia, as well as the savings on transaction fees, this move gives Expedia bragging rights as the first e-commerce giant to accept digital currency! Image credit: Jonathan Caves

    Read the article

  • Webcast: Leveraging Mobile And Social Commerce To Deliver A Complete Customer Experience

    - by Michael Hylton
    Mobile and social media are emerging as new channels for customers to interact and transact with brands. Mobile users demand experiences that are relevant and engaging and are designed with the capabilities and constraints of devices in mind. Just having a mobile app or mobile-specific website is not a long-term strategy. Brands must invest in an optimized experience, especially as mobile becomes critical to an overall digital commerce strategy. Debating the merits of using Facebook or not is missing the point when it comes to social media. True innovators are thinking beyond the social channel and are building programs that leverage Facebook data to drive conversions and engagement both on and off Facebook. Learn how to be more strategic about mobile and social commerce in this informative editorial webcast. Attend this webcast and you will learn:
    - How to leverage mobile and social touchpoints in digital commerce
    - Why having a Facebook page or a mobile app is not enough
    - The benefits of a consistent, personalized and relevant customer experience
    - Strategies for integrating mobile and social into an overall digital commerce strategy
    Featured Speakers: Peter Sheldon, Senior Analyst, eBusiness & Channel Strategy Professionals, Forrester Research; Brenna Johnson, Product Manager, Oracle Commerce. Click here to register.

    Read the article

  • The Oldest Big Data Problem: Parsing Human Language

    - by dan.mcclary
    There's a new whitepaper up on Oracle Technology Network which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance.  Digital Reasoning's approach is inherently "big data friendly," as it leverages multiple components of the Hadoop ecosystem.  Moreover, the paper addresses the oldest big data problem of them all: extracting knowledge from human text.   You can find the paper here.   From the Executive Summary: There is a wealth of information to be extracted from natural language, but that extraction is challenging. The volume of human language we generate constitutes a natural Big Data problem, while its complexity and nuance requires a particular expertise to model and mine. In this paper we illustrate the impressive combination of Oracle Big Data Appliance and Digital Reasoning Synthesys software. The combination of Synthesys and Big Data Appliance makes it possible to analyze tens of millions of documents in a matter of hours. Moreover, this powerful combination achieves four times greater throughput than conducting the equivalent analysis on a much larger cloud-deployed Hadoop cluster.

    Read the article

  • Weird vps server issue

    - by anon-user0
    I have an unmanaged Linux VPS running Ubuntu 11.10 (Oneiric Ocelot). I have LNMP installed, plus php-fpm, php-apc, varnish and memcache. I have (or rather had) several live sites on it. Under normal load the server uses ~700 MB of memory. But since last night it is using only ~20 MB of memory and a lot of the services seem to be down; according to htop I only see nginx working, and mysql starts up and goes down every few minutes in a loop. Here is some information on the server that might help you help me:

    root@server:~# uname -a
    Linux server 2.6.18-308.el5.028stab099.3 #1 SMP Wed Mar 7 15:56:00 MSK 2012 i686 i686 i386 GNU/Linux

    root@server:~# ifconfig -a
    lo        Link encap:Local Loopback
              LOOPBACK MTU:16436 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    venet0    Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
              UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
              RX packets:12515 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9541 errors:0 dropped:1 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:7191214 (7.1 MB) TX bytes:536726 (536.7 KB)
    venet0:0  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:176.31.158.78 P-t-P:176.31.158.78 Bcast:0.0.0.0 Mask:255.255.255.255
              UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

    root@server:~# netstat -l
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address   Foreign Address   State
    tcp        0      0 *:http-alt      *:*               LISTEN
    tcp        0      0 *:ssh           *:*               LISTEN
    tcp6       0      0 [::]:http-alt   [::]:*            LISTEN
    tcp6       0      0 [::]:ssh        [::]:*            LISTEN
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags   Type   State      I-Node  Path
    unix  2      [ ACC ] STREAM LISTENING  9307368 @/com/ubuntu/upstart

    htop: http://i.stack.imgur.com/NHKYX.png

    EDIT: Stressed; my mind was not working. Adding logs:

    root@server:~# less /var/log/syslog
    Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 05:39:01 server CRON[9298]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 05:40:01 server CRON[9463]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 05:46:21 server sm-msp-queue[9480]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:19:14, xdelay=00:06:18, mailer=relay, pri=122407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 05:52:39 server sm-msp-queue[9480]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:06:32, xdelay=00:06:18, mailer=relay, pri=842407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:00:01 server CRON[15671]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:06:22 server sm-msp-queue[15690]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:39:15, xdelay=00:06:18, mailer=relay, pri=212407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:09:01 server CRON[18114]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:12:40 server sm-msp-queue[15690]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:26:33, xdelay=00:06:18, mailer=relay, pri=932407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:20:02 server CRON[21888]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:26:22 server sm-msp-queue[21907]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:59:15, xdelay=00:06:18, mailer=relay, pri=302407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:27:02 server CRON[24021]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 06:32:40 server sm-msp-queue[21907]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:46:33, xdelay=00:06:18, mailer=relay, pri=1022407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:39:01 server CRON[27941]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:40:02 server CRON[28110]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:46:22 server sm-msp-queue[28125]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:19:15, xdelay=00:06:18, mailer=relay, pri=392407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:06:33, xdelay=00:06:18, mailer=relay, pri=1112407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: q5R2e4uo028125: sender notify: Warning: could not send message for past 4 hours Jun 27 06:52:44 server sm-msp-queue[28125]: q5R2e4uo028125: to=root, delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=33690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:00:02 server CRON[1543]: 
(smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:06:21 server sm-msp-queue[1560]: q5R2e4uo028125: to=root, delay=00:13:41, xdelay=00:06:18, mailer=relay, pri=123690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:09:01 server CRON[3986]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 07:12:39 server sm-msp-queue[1560]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:45:32, xdelay=00:06:18, mailer=relay, pri=482407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:18:57 server sm-msp-queue[1560]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:32:50, xdelay=00:06:18, mailer=relay, pri=1202407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:20:02 server CRON[7760]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:26:22 server sm-msp-queue[7775]: q5R2e4uo028125: to=root, delay=00:33:42, xdelay=00:06:18, mailer=relay, pri=213690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:27:01 server CRON[9887]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 07:32:40 server sm-msp-queue[7775]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=02:05:33, xdelay=00:06:18, mailer=relay, pri=572407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:38:58 server sm-msp-queue[7775]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:52:51, xdelay=00:06:18, mailer=relay, pri=1292407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:39:01 server CRON[13813]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth : root@server:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/simfs 20G 2.3G 18G 12% / - Jun 26 16:22:41 server varnishd[1413]: Child (32425) died signal=3 Jun 26 16:22:41 server varnishd[1413]: child (21687) Started Jun 26 16:22:41 server varnishd[1413]: Child (21687) said Child starts Jun 26 16:22:41 server varnishd[1413]: Child (21687) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 26 16:34:28 server -- MARK -- Jun 26 16:54:29 server -- MARK -- Jun 26 17:14:29 server -- MARK -- Jun 26 17:34:29 server -- MARK -- Jun 26 17:54:29 server -- MARK -- Jun 26 18:14:29 server -- MARK -- Jun 26 18:34:29 server -- MARK -- Jun 26 18:54:29 server -- MARK -- Jun 26 19:14:29 server -- MARK -- Jun 26 19:34:29 server -- MARK -- Jun 26 19:54:29 server -- MARK -- Jun 26 20:14:29 server -- MARK -- Jun 26 20:34:29 server -- MARK -- Jun 26 20:48:12 server exiting on signal 15 Jun 26 20:51:58 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 26 20:52:01 server varnishd[1324]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 26 21:11:58 server -- MARK --
Jun 26 21:31:58 server -- MARK --
Jun 26 21:51:58 server -- MARK --
Jun 26 22:11:58 server -- MARK --
Jun 26 22:31:58 server -- MARK --
Jun 26 22:51:58 server -- MARK --
Jun 26 23:11:58 server -- MARK --
Jun 26 23:31:58 server -- MARK --
Jun 26 23:51:58 server -- MARK --
Jun 27 00:11:58 server -- MARK --
Jun 27 00:23:42 server exiting on signal 15
Jun 27 02:21:10 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 02:21:12 server varnishd[1341]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 02:41:10 server -- MARK --
Jun 27 02:46:41 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:20:44 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:20:46 server varnishd[1238]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 03:20:46 server varnishd[1238]: child (1239) Started
Jun 27 03:20:46 server varnishd[1238]: Child (1239) said Child starts
Jun 27 03:20:46 server varnishd[1238]: Child (1239) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824
Jun 27 03:32:52 server exiting on signal 15
Jun 27 03:33:16 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:33:31 server varnishd[1372]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 03:53:16 server -- MARK --
Jun 27 04:13:16 server -- MARK --
Jun 27 04:33:16 server -- MARK --
Jun 27 04:53:16 server -- MARK --
Jun 27 05:13:16 server -- MARK --
Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 05:53:17 server -- MARK --
Jun 27 06:13:17 server -- MARK --
Jun 27 06:33:17 server -- MARK --
Jun 27 06:53:17 server -- MARK --
Jun 27 07:13:17 server -- MARK --
Jun 27 07:33:17 server -- MARK --
Jun 27 07:53:17 server -- MARK --
Jun 27 08:13:17 server -- MARK --
Jun 27 08:33:17 server -- MARK --
Jun 27 08:53:17 server -- MARK --
Jun 27 09:13:17 server -- MARK --
Jun 27 09:33:17 server -- MARK --
Jun 27 09:53:17 server -- MARK --
Jun 27 10:13:17 server -- MARK --
Jun 27 10:33:17 server -- MARK --
Jun 27 10:53:17 server -- MARK --
Jun 27 11:13:17 server -- MARK --
Jun 27 11:33:17 server -- MARK --
Jun 27 11:53:18 server -- MARK --
Jun 27 12:13:18 server -- MARK --
Jun 27 12:33:18 server -- MARK --
Jun 27 12:53:18 server -- MARK --
Jun 27 13:13:18 server -- MARK --
Jun 27 13:33:18 server -- MARK --
Jun 27 13:53:18 server -- MARK --
Jun 27 14:13:18 server -- MARK --
Jun 27 14:33:18 server -- MARK --
Jun 27 14:53:18 server -- MARK --

--

root@server:~# cat /var/log/nginx/error.log
2012/06/27 03:32:54 [alert] 1199#0: worker process 1203 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1200 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1201 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1202 exited on signal 9

root@server:~# cat /var/log/nginx/access.log
31.210.99.87 - - [27/Jun/2012:09:09:08 +0400] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 172 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cms/cmx.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /iesvc/iesvc.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cmd2/index.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:09 +0400] "GET /cmd/index.jsp HTTP/1.1" 301 184 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:19 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5"
58.97.147.197 - - [27/Jun/2012:17:17:37 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5"
58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:48 +0400] "-" 400 0 "-" "-"

-

root@server:~# cat /var/log/daemon.log
Jun 26 20:48:10 server xinetd[1177]: Exiting...
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26]
Jun 26 20:51:58 server xinetd[1174]: removing chargen
Jun 26 20:51:58 server xinetd[1174]: removing chargen
Jun 26 20:51:58 server xinetd[1174]: removing daytime
Jun 26 20:51:58 server xinetd[1174]: removing daytime
Jun 26 20:51:58 server xinetd[1174]: removing discard
Jun 26 20:51:58 server xinetd[1174]: removing discard
Jun 26 20:51:58 server xinetd[1174]: removing echo
Jun 26 20:51:58 server xinetd[1174]: removing echo
Jun 26 20:51:58 server xinetd[1174]: removing time
Jun 26 20:51:58 server xinetd[1174]: removing time
Jun 26 20:51:58 server xinetd[1174]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in.
Jun 26 20:51:58 server xinetd[1174]: Started working: 0 available services
Jun 26 20:52:01 server vnstatd[1330]: vnStat daemon 1.11 started.
Jun 26 20:52:01 server vnstatd[1330]: Monitoring: venet0
Jun 27 00:23:41 server xinetd[1174]: Exiting...
Jun 27 02:21:12 server vnstatd[1349]: vnStat daemon 1.11 started.
Jun 27 02:21:12 server vnstatd[1349]: Monitoring: venet0
Jun 27 03:20:44 server xinetd[1166]: attribute: disable should not be in default section [file=/etc/xinetd.conf] [line=12]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/chargen [file=/etc/xinetd.conf] [line=15]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26]
Jun 27 03:20:44 server xinetd[1166]: removing chargen
Jun 27 03:20:44 server xinetd[1166]: removing chargen
Jun 27 03:20:44 server xinetd[1166]: removing daytime
Jun 27 03:20:44 server xinetd[1166]: removing daytime
Jun 27 03:20:44 server xinetd[1166]: removing discard
Jun 27 03:20:44 server xinetd[1166]: removing discard
Jun 27 03:20:44 server xinetd[1166]: removing echo
Jun 27 03:20:44 server xinetd[1166]: removing echo
Jun 27 03:20:44 server xinetd[1166]: removing time
Jun 27 03:20:44 server xinetd[1166]: removing time
Jun 27 03:20:44 server xinetd[1166]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in.
Jun 27 03:20:44 server xinetd[1166]: Started working: 0 available services
Jun 27 03:20:46 server vnstatd[1249]: vnStat daemon 1.11 started.
Jun 27 03:20:46 server vnstatd[1249]: Monitoring: venet0
Jun 27 03:32:41 server xinetd[1166]: Exiting...
Jun 27 03:33:32 server vnstatd[1380]: vnStat daemon 1.11 started.
Jun 27 03:33:32 server vnstatd[1380]: Monitoring: venet0
root@server:~#

-

If you need anything else, let me know.

    Read the article

  • PyGTK: dynamic label wrapping

    - by detly
    It's a known bug/issue that a label in GTK will not dynamically resize when the parent changes. It's one of those really annoying small details, and I want to hack around it if possible. I followed the approach at 16 software, but as per the disclaimer you cannot then resize it smaller. So I attempted a trick mentioned in one of the comments (the set_size_request call in the signal callback), but this results in some sort of infinite loop (try it and see). Does anyone have any other ideas? (You can't block the signal just for the duration of the call, since, as the print statements seem to indicate, the problem starts after the function is left.) The code is below. You can see what I mean if you run it and try to resize the window larger and then smaller. (If you want to see the original problem, comment out the line after "Connect to the size-allocate signal", run it, and resize the window bigger.)

    The Glade file ("example.glade"):

    <?xml version="1.0"?>
    <glade-interface>
      <!-- interface-requires gtk+ 2.16 -->
      <!-- interface-naming-policy project-wide -->
      <widget class="GtkWindow" id="window1">
        <property name="visible">True</property>
        <signal name="destroy" handler="on_destroy"/>
        <child>
          <widget class="GtkLabel" id="label1">
            <property name="visible">True</property>
            <property name="label" translatable="yes">In publishing and graphic design, lorem ipsum[p][1][2] is the name given to commonly used placeholder text (filler text) to demonstrate the graphic elements of a document or visual presentation, such as font, typography, and layout. The lorem ipsum text, which is typically a nonsensical list of semi-Latin words, is a hacked version of a Latin text by Cicero, with words/letters omitted and others inserted, but not proper Latin[1][2] (see below: History and discovery). The closest English translation would be "pain itself" (dolorem = pain, grief, misery, suffering; ipsum = itself).</property>
            <property name="wrap">True</property>
          </widget>
        </child>
      </widget>
    </glade-interface>

    The Python code:

    #!/usr/bin/python
    import pygtk
    pygtk.require("2.0")  # select the GTK 2.x bindings before importing gtk
    import gobject
    import gtk.glade

    def wrapped_label_hack(gtklabel, allocation):
        print "In wrapped_label_hack"
        gtklabel.set_size_request(allocation.width, -1)
        # If you uncomment this, we get INFINITE LOOPING!
        # gtklabel.set_size_request(-1, -1)
        print "Leaving wrapped_label_hack"

    class ExampleGTK:

        def __init__(self, filename):
            self.tree = gtk.glade.XML(filename, "window1", "Example")
            self.id = "window1"
            self.tree.signal_autoconnect(self)
            # Connect to the size-allocate signal
            self.get_widget("label1").connect("size-allocate", wrapped_label_hack)

        def on_destroy(self, widget):
            self.close()

        def get_widget(self, id):
            return self.tree.get_widget(id)

        def close(self):
            window = self.get_widget(self.id)
            if window is not None:
                window.destroy()
            gtk.main_quit()

    if __name__ == "__main__":
        window = ExampleGTK("example.glade")
        gtk.main()
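    One idea worth trying, sketched below as a minimal, unverified variant of the hack above (the _last_wrap_width attribute is a made-up bookkeeping name used only for this illustration): guard the callback so it only issues a new size request when the allocated width actually changes, which should break the request/re-allocation feedback cycle behind the infinite loop. Like the original hack, it still will not let the window shrink below the last requested width.

    def wrapped_label_hack_guarded(gtklabel, allocation):
        # Re-issuing a size request from inside size-allocate triggers
        # another allocation pass; only act when the width has changed.
        if getattr(gtklabel, "_last_wrap_width", None) == allocation.width:
            return
        gtklabel._last_wrap_width = allocation.width  # hypothetical attribute
        gtklabel.set_size_request(allocation.width, -1)

    It would be hooked up the same way as the original: self.get_widget("label1").connect("size-allocate", wrapped_label_hack_guarded).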

    Read the article

  • Why does my ko computed observable not update bound UI elements when its value changes?

    - by Allen
    I'm trying to wrap a cookie in a computed observable (which I'll later turn into a protectedObservable) and I'm having some problems with the computed observable. I was under the impression that changes to the computed observable would be broadcast to any UI elements that are bound to it. I've created the following fiddle.

    JavaScript:

    var viewModel = {};

    // simulating a cookie store, this part isn't as important
    var cookie = function () {
        // simulating a value stored in cookies
        var privateZipcode = "12345";
        return {
            'write': function (val) {
                privateZipcode = val;
            },
            'read': function () {
                return privateZipcode;
            }
        };
    }();

    viewModel.zipcode = ko.computed({
        read: function () {
            return cookie.read();
        },
        write: function (value) {
            cookie.write(value);
        },
        owner: viewModel
    });

    ko.applyBindings(viewModel);

    HTML:

    zipcode: <input type='text' data-bind="value: zipcode">
    <br />
    zipcode: <span data-bind="text: zipcode"></span>

    I'm not using an observable to store privateZipcode, since that's really just going to be in a cookie. I'm hoping that the ko.computed will provide the notification and binding functionality I need, though most of the examples I've seen with ko.computed end up using a ko.observable underneath the covers. Shouldn't the act of writing the value to my computed observable signal the UI elements that are bound to its value? Shouldn't these just update?

    Workaround: I've got a simple workaround where I just use a ko.observable alongside my cookie store, and using that triggers the required updates to my DOM elements. But this seems completely unnecessary, unless ko.computed lacks the signaling/dependency functionality that ko.observable has. In my workaround fiddle, the only thing that changes is that I added seperateObservable, which isn't really used as a store; its only purpose is to signal to the UI that the underlying data has changed.

    // simulating a cookie store, this part isn't as important
    var cookie = function () {
        // simulating a value stored in cookies
        var privateZipcode = "12345";
        // extra observable that isn't really used as a store, just to trigger updates to the UI
        var seperateObservable = ko.observable(privateZipcode);
        return {
            'write': function (val) {
                privateZipcode = val;
                seperateObservable(val);
            },
            'read': function () {
                seperateObservable();
                return privateZipcode;
            }
        };
    }();

    This makes sense and works as I'd expect, because viewModel.zipcode depends on seperateObservable, and updates to that should (and do) signal the UI to update. What I don't understand is why a call to the write function on my ko.computed doesn't signal the UI to update, since that element is bound to that ko.computed. I suspected that I might have to use something in Knockout to manually signal that my ko.computed has been updated, and I'm fine with that; that makes sense. I just haven't been able to find a way to accomplish that.
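    For context, the behaviour described matches how Knockout's dependency tracking works: a computed only re-evaluates (and so only notifies its subscribers) when an observable that was read during its last evaluation changes, and cookie.read() above touches no observables, so writes go unnoticed. A minimal sketch of the usual fix, assuming the cookie value can simply be backed by an observable inside the closure (the names here are illustrative, not from the original fiddle):

    var cookie = function () {
        // The observable is both the store and the dependency Knockout
        // tracks, so every write notifies anything bound to the computed.
        var privateZipcode = ko.observable("12345");
        return {
            'write': function (val) { privateZipcode(val); },
            'read': function () { return privateZipcode(); }
        };
    }();

    Compared with the workaround above, this drops the separate signalling observable by letting the observable itself be the store; a real cookie-backed version would additionally persist the value inside the write function.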

    Read the article

  • linked list elements gone?

    - by Hristo
    I create a linked list dynamically and initialize the first node in main(), and I add to the list every time I spawn a worker process. Before the worker process exits, I print the list. Also, I print the list inside my SIGCHLD signal handler.

    In main():

    head = NULL;
    tail = NULL;

    // linked list to keep track of worker processes
    dll_node_t *node;
    node = (dll_node_t *) malloc(sizeof(dll_node_t));  // initialize list, allocate memory
    append_node(node);
    node->pid = mainPID;  // the first node is the MAIN process
    node->type = MAIN;

    In a fork()'d process:

    // add to list
    dll_node_t *node;
    node = (dll_node_t *) malloc(sizeof(dll_node_t));
    append_node(node);
    node->pid = mmapFileWorkerStats->childPID;
    node->workerFileName = mmapFileWorkerStats->workerFileName;
    node->type = WORK;

    Functions:

    void append_node(dll_node_t *nodeToAppend) {
        /* append param node to end of list */
        if (head == NULL) {
            // the list is empty: create the first/head node
            head = nodeToAppend;
            nodeToAppend->prev = NULL;
        } else {
            tail->next = nodeToAppend;
            nodeToAppend->prev = tail;
        }
        // fix the tail to point to the new node
        tail = nodeToAppend;
        nodeToAppend->next = NULL;
    }

    Finally, the signal handler:

    void chld_signalHandler(int signum) {  // handlers installed via signal()/sigaction() receive the signal number
        dll_node_t *temp1 = head;
        while (temp1 != NULL) {
            printf("2. node's pid: %d\n", temp1->pid);
            temp1 = temp1->next;
        }
        int termChildPID = waitpid(-1, NULL, WNOHANG);
        dll_node_t *temp = head;
        while (temp != NULL) {
            if (temp->pid == termChildPID) {
                printf("found process: %d\n", temp->pid);
            }
            temp = temp->next;
        }
        return;
    }

    Is it true that the SIGCHLD signal handler is triggered when the worker process exits? If so, after I print the list just before the worker exits, the next thing that runs is the signal handler, which prints the list again, so I should see the list printed twice? But the list isn't the same: the node I add in the worker process doesn't exist when I print in the signal handler, or at the very end of main(). Any idea why?

    Thanks, Hristo

    Read the article

  • "This task is currently locked by a running workflow and cannot be edited": a limitation of both Nintex and SPD workflows

    - by ybbest
    Note: this post is from the Nintex Forum here. These limitations apply to both SharePoint Designer workflows and Nintex Workflow, as Nintex uses the SharePoint workflow engine. The most common cause I experience is the 'parent' workflow generating more than one task at once. This is common, since you can have multiple approvers in a given approval process. You could also have a workflow running when the task is created; one common scenario is wanting to set a custom column value on your approval task. For me this is a huge limitation; as a Nintex lover, I really hope Nintex can solve this problem with Microsoft going forward.

    Introduction

    "This task is currently locked by a running workflow and cannot be edited" is a common message seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur:

    1. The process begins.
    2. The workflow places a 'lock' on the task so nothing else can change the values while the workflow is processing.
    3. The workflow processes the task.
    4. The lock is released when the task processing is finished.

    When the message is encountered, it usually indicates that an error occurred between steps 2 and 4. As a result, the lock is never released. The 'task locked' message is therefore not an error itself, but a symptom of another error; the message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are:

    Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time.

    Does the error occur the first time the user tries to respond to a task, or do they respond, notice the workflow does not continue, and then see the error when they respond again? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task.

    Causes

    Modifying the task list

    A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the 'Assigned to' field in the task list to allow multiple selections will cause this behaviour.

    Deleting the workflow task then restoring it from the Recycle Bin

    If you start a workflow, delete the workflow task and then restore it from the Recycle Bin in SharePoint, the workflow will fail with the 'task locked' error. This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow. You will need to terminate the workflow and start it again.

    Parallel simultaneous responses

    A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the 'task locked' message will display. Nintex included a workaround for this issue in build 11000.
    In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments.

    Additional processing on the task

    A cause of this error appearing both consistently and inconsistently is an additional system operating on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list, or another automated process querying and updating workflow tasks. Note: this Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the 'task locked' issues when the 'parent' workflow is generating more than one task at once.

    Isolated system error

    If the error is a rare or one-off event, an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue; when they respond again, the 'task locked' message will display. There will be an error in the SharePoint ULS logs at the time the user originally responded.

    Temporary delay while the workflow processes

    If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the 'task locked' error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening: the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or just after the SharePoint application pool has started.

    Modifying the task via a web service with an invalid URL

    If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the URL must be a valid alternate access mapping (AAM) URL. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked.

    The workflow has become stuck without any apparent errors

    This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine. If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, "Flexi-task", "State machine" or "Task Reminder" actions, or variables, you could be affected. Check the SharePoint 2010 updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847. The October CU is recommended: http://support.microsoft.com/kb/2553031. The fix is described as: "Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity." If you find this occurring in your environment, install the October CU, terminate all the affected running workflows and run them afresh.

    Investigative steps

    The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it. Any customizations that were made to the original task list should not be made to the new task list.
    If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses: in the Workflow Designer, select Settings -> Startup Options, then configure the task list as required.

    If none of the scenarios above helps, check the SharePoint logs for any messages with a category of 'Workflow Infrastructure'.

    Conclusion

    The information in this article has been gathered from observations and investigations by Nintex. The source of these issues is the underlying SharePoint workflow engine. This article will be updated if further causes are discovered.

    From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >