Search Results

Search found 2342 results on 94 pages for 'valter minute'.


  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five minute video I shared last week has whetted your appetite for more, then this might be just what you are looking for! The same international team that made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise – have also created a thirty minute version that walks through the process in much more detail and shows you, from the perspective of the various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end-to-end process lifecycle. The video centres around a Retail Leasing use case, and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper. Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules; Pablo, the Process Owner, reviews the process documentation and tests the process using the new ‘Process Player’; James, the Process Analyst, analyses the process and identifies potential bottlenecks using ‘Process Simulation’. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM PS6,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How do I stop my ethernet network connection from dropping?

    - by Sean Hill
    My ethernet-based network connection doesn't stay up consistently. I'm running a ping against the gateway and it will: work for a minute; freeze, time out, or give multi-second response times; repeat. If it's stuck and I disable/enable networking through the network manager applet, everything will work fine again for a minute. After 280 packets transmitted, I'm getting 41% packet loss. I've tried a different cable and connection to the gateway, but this had no effect. The distance to the gateway is just about 3 feet. It seems to work fine if I switch over to Windows, but Ubuntu is my main OS and I can't even use it right now as I depend on the network. My setup... OS: Ubuntu 11.04, dual-booting Windows 7 Mobo: Gigabyte Z68X-UD4-B3 CPU: Intel Core i7 2600K Edit: A little clarification... Network Manager is still showing me as connected, but I am unable to reach the gateway or anything beyond. At no point does NM suggest the connection is lost, and calling ifconfig shows that I still have an IP address. I tried connecting to a different gateway with a different cable and the same problem arises. As requested: lspci | grep -i eth 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) dmesg | tail -f [ 14.024709] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro,commit=0 [ 14.026443] EXT4-fs (sda7): re-mounted. Opts: commit=0 [ 14.176101] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. [ 23.917731] eth0: no IPv6 routers present [ 726.109697] r8169 0000:07:00.0: eth0: link up [ 733.169494] r8169 0000:07:00.0: eth0: link up [ 753.930119] r8169 0000:07:00.0: eth0: link up [ 880.787332] r8169 0000:07:00.0: eth0: link up [ 1159.161283] r8169 0000:07:00.0: eth0: link up [ 1406.623550] r8169 0000:07:00.0: eth0: link up Edit: @roland-taylor: The network is always available under Windows. Pings do not time out, applications do not complain of no network availability, and large downloads are not interrupted or slowed.
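
    Until the root cause is found (the repeated "r8169 ... link up" lines suggest the driver keeps renegotiating the link), the disable/enable trick can be automated. A crude watchdog sketch – the gateway address is an assumption, and restarting network-manager may mimic the applet toggle more closely than bouncing the interface:

        #!/bin/bash
        # probe the gateway; bounce eth0 whenever three pings in a row fail
        GW=192.168.1.1                  # assumption: your gateway's address
        while true; do
            if ! ping -c 3 -W 2 "$GW" > /dev/null; then
                sudo ifconfig eth0 down && sudo ifconfig eth0 up
            fi
            sleep 10
        done

    This is a workaround, not a fix; it just saves the trip to the applet while debugging.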

    Read the article

  • Oracle Database 12c By Example – SQL Developer and Multitenant

    - by thatjeffsmith
    As you may have heard, Oracle Database 12c is now available. In addition to the binaries and docs going out, we also published a few new Oracle By Example (OBE) chapters. You can find those links here on our product page. Do you know who found these, practically the minute they were published? An enterprising DBA-extraordinaire who just happened to be presenting at the ODTUG KScope13 conference in New Orleans. He thought it would be a good idea to download the new software over hotel Wi-Fi, install it and create a new multitenant database, watch a few OBEs, and then demo it all live for his ‘SQL Developer for DBAs’ session. Pretty crazy, right? Well, he did it, and I was there to watch. Way cool. You can listen to @leight0nn tell his story in his own words via this ODTUG interview with @oraclenered. In case you’re too giddy to sit through the video, I’ll give you a preview – he successfully cloned a pluggable database in about a minute, with only a couple of clicks, using Oracle SQL Developer 3.2.20.09 while connected to a 12c database.
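
    For the curious, those couple of clicks drive documented 12c SQL under the covers. A rough sketch of what a local clone boils down to (container names and file paths are made up, and with Oracle Managed Files the FILE_NAME_CONVERT clause isn't needed):

        ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;   -- source must be read-only to clone
        CREATE PLUGGABLE DATABASE pdb1_clone FROM pdb1
          FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdb1/', '/u01/oradata/cdb1/pdb1_clone/');
        ALTER PLUGGABLE DATABASE pdb1_clone OPEN;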

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline: During the iteration: Developers work on stories from the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch. Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically. The testers have the ability to auto-deploy integration to a staging environment, and this occurs multiple times per week, enabling continual running of their test suites. Every Monday: there is a release planning meeting to determine which stories are "known good" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration. No new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from. Every Tuesday: the testers have tested the integration branch as much as they possibly can in the time available and there are no known bugs, so a release is cut and pushed out to the production nodes slowly. This sounds OK on paper, but we have found that it is incredibly difficult to achieve in practice. The team sees the following symptoms: "subtle" bugs are found in production that were not identified in the staging environment; last-minute hot-fixes continue into the Tuesday; problems in the production environment require roll-backs, which block continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from). I think test coverage, code quality, the ability to regression-test quickly, last-minute changes and environmental differences are all at play here. Can anyone offer any advice on how best to achieve "continual" delivery?
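
    For reference, the branch choreography described above, sketched in git commands (branch and tag names are my own invention):

        git checkout -b story-123 master          # short-lived feature branch
        # ...commit, then share it for CI and testing:
        git checkout integration
        git merge --no-ff story-123
        # Tuesday, after sign-off: cut the release
        git checkout master
        git merge --ff-only integration
        git tag release-2012-06-12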

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore); fetching as many as 20 static variables (from the datastore or persisted in server memory); writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
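
    For a sense of scale, a sketch of the cron-on-GAE option (Python runtime; the kind, property and URL names are assumptions). Note that touching 10,000 entities in one request risks the request deadline, so a real version would fan out via the task queue or use query cursors:

        # cron.yaml – fire the update once a minute
        cron:
        - description: world update
          url: /tasks/world_update
          schedule: every 1 minutes

        # update.py – handler sketch (old db API; ndb would look similar)
        from google.appengine.ext import db, webapp

        class WorldEntity(db.Model):
            hp = db.IntegerProperty()        # stand-in for the 5 private variables

        class WorldUpdate(webapp.RequestHandler):
            def get(self):
                entities = WorldEntity.all().fetch(10000)   # the reads
                for e in entities:
                    e.hp += 1                               # apply this turn's rules
                db.put(entities)                            # one batched write pass

        app = webapp.WSGIApplication([('/tasks/world_update', WorldUpdate)])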

    Read the article

  • Wired connection problem through router

    - by tommypincha
    I'm having trouble connecting my Ubuntu 11.10 machine to the internet over ethernet. I installed a router to get Wi-Fi, and now (over a wired connection) I can't get internet with Ubuntu (but I can with Windows 7). I see several attempts per minute by the network-manager to get a connection, but after a minute it stops trying. Here are a couple of outputs from key files: cat /etc/network/interfaces auto lo iface lo inet loopback and ifconfig -a eth0 Link encap:Ethernet HWaddr 00:16:76:e4:a6:e8 inet6 addr: fe80::216:76ff:fee4:a6e8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:117 dropped:0 overruns:0 frame:117 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:12221 (12.2 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:582 errors:0 dropped:0 overruns:0 frame:0 TX packets:582 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:46024 (46.0 KB) TX bytes:46024 (46.0 KB) I tried reconnecting the modem and the router and reconnecting the ethernet cable, but nothing... I tried other solutions from other posts (this one has a similar issue: Wired connection not working), but again nothing. My IP is dynamic. A couple of things I see and did: I see no inet addr, only inet6. I disabled IPv6 in the internet connections and restarted the network-manager service – nothing. A difference from the post I mentioned is the RX packets with errors I have – is this a clue to the problem? Any help would be appreciated. Thanks!
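
    The interfaces file above only configuring loopback is normal when Network Manager owns the connection, but as a test you can hand eth0 to ifupdown with a plain DHCP stanza (a sketch; note that once eth0 is listed here, Network Manager stops managing it):

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

    Then sudo ifdown eth0 && sudo ifup eth0 (or a reboot) should request a lease. If the interface still shows RX errors:117 frame:117 with zero received bytes afterwards, that pattern points more toward a physical-layer problem (cable, port, or speed/duplex negotiation) than configuration.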

    Read the article

  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? What is better / more professional / offers better performance: Ignore all checking and update the hour, minute and seconds parts each time, every second. Use an if plus a variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments. This leads us to the question: which is better – unnecessary redraws (first approach) or extra ifs? I've just spotted a Javascript digital clock, one of millions similar on one of billions of pages, and I noticed that all three parts (hours, minutes and seconds) are updated every second, though the first changes its value only once every 3600 seconds and the second once every 60 seconds. I'm not too experienced a developer, so I might be wrong. But everything I've learnt up until now tells me that ifs are far cheaper than executing drawing / refreshing sequences only to draw the same content.
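
    For reference, a sketch of the if-guarded variant in JavaScript (element ids are assumptions); two integer comparisons per tick are trivially cheap next to redundant DOM writes:

        var pad = function (n) { return (n < 10 ? '0' : '') + n; };
        var lastMin = -1, lastHour = -1;
        function tick() {
            var now = new Date();
            document.getElementById('sec').textContent = pad(now.getSeconds());
            if (now.getMinutes() !== lastMin) {     // redraw minutes only on change
                lastMin = now.getMinutes();
                document.getElementById('min').textContent = pad(lastMin);
            }
            if (now.getHours() !== lastHour) {      // redraw hours only on change
                lastHour = now.getHours();
                document.getElementById('hour').textContent = pad(lastHour);
            }
        }
        setInterval(tick, 1000);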

    Read the article

  • If your algorithm is correct, does it matter how long it took you to write it?

    - by John Isaacks
    I recently found out that Facebook has a programming challenge that, if completed correctly, automatically gets you a phone interview. There is a sample challenge that asks you to write an algorithm that can solve a Tower of Hanoi type problem. Given a number of pegs and discs, and an initial and final configuration, your algorithm must determine the fewest steps possible to get to the final configuration, and output those steps. This sample challenge gives you a 45 minute time limit, but allows you to still test your code to see if it passes once your time limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one since I think that would be cheating. So I tried to solve the challenge the best I could on my own. I was able to write an algorithm that worked and passed. However, it took me over 4 hours to write, much longer than the 45 minute requirement. Since it took me so much longer than the allotted time, I have not attempted the actual challenge. This got me wondering though: in reality, does it really matter that it took me that long? I mean, is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or can the fact that I actually passed on my first attempt, even though it took too long, be taken as a good sign?
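
    Since the challenge asks for the fewest steps, plain breadth-first search over board configurations is enough at sample-challenge sizes – no cute math required. A minimal Python sketch, under my own assumed encoding (a state is a tuple mapping disc index, 0 = smallest, to its peg):

        from collections import deque

        def fewest_moves(pegs, start, goal):
            """BFS over configurations; path entries are (disc, from, to)."""
            queue, seen = deque([(start, [])]), {start}
            while queue:
                state, path = queue.popleft()
                if state == goal:
                    return path
                tops = {}                            # smallest disc on each peg
                for disc, peg in enumerate(state):
                    tops.setdefault(peg, disc)
                for peg, disc in tops.items():
                    for dest in range(pegs):
                        # legal if dest is empty or its top disc is bigger
                        if dest != peg and tops.get(dest, len(state)) > disc:
                            nxt = list(state)
                            nxt[disc] = dest
                            nxt = tuple(nxt)
                            if nxt not in seen:
                                seen.add(nxt)
                                queue.append((nxt, path + [(disc, peg, dest)]))

        # 3 discs, 3 pegs, everything from peg 0 to peg 2 -> the classic 7 moves
        print(fewest_moves(3, (0, 0, 0), (2, 2, 2)))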

    Read the article

  • hd0 out of disk error results in low-graphics mode

    - by msPeachy
    Yesterday I reinstalled Ubuntu due to an error on boot: hd0 out of disk. Everything went fine: I installed apps, performed updates and upgraded the kernel. I even restarted it a few times just to check whether I would encounter boot issues, and was glad that everything was working perfectly; then I powered it down. The next morning when I booted, I got this error: hd0 out of disk error. Press any key to continue... again! After pressing a key, it took 10 minutes for the Ubuntu logo to appear with its 5 dots. After another 5 minutes, Ubuntu started checking the disk and displayed a message that / has errors; I pressed F to fix the errors. After that Ubuntu told me that /tmp was not yet ready for mounting, so I pressed S to skip mounting it, and then Ubuntu restarted. On boot I saw the error: hd0 out of disk error. Press any key to continue... again. This time it took only a minute for the Ubuntu logo to appear, and after another minute a dialog box appeared with the following message: The system is running in low-graphics mode. Your screen, graphics card, and input settings could not be detected correctly. You will have to configure these yourself. What would you like to do? Run in low-graphics mode for just one session / Reconfigure graphics / Troubleshoot the error / Exit to console login. Whichever option I choose, I end up with a console prompt: grub-editenv: error: cannot read the file /boot/grub/grubenv. _ I can't do anything on this console; whatever I type, nothing happens. I've rebooted several times and I get the same error every time. I don't quite understand what is wrong with Ubuntu or with my installation. I've encountered this hd0 out of disk error several times already and have always ended up reinstalling. I'd really, really appreciate it if you guys could help me fix this. Thank you and good day.
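
    The grub-editenv message at the end suggests the GRUB environment file itself has become unreadable. One commonly suggested repair, run from a live CD/USB, is to recreate that file – a sketch, where /dev/sda1 is a stand-in for whatever partition holds /boot:

        sudo mount /dev/sda1 /mnt
        sudo rm /mnt/boot/grub/grubenv
        sudo grub-editenv /mnt/boot/grub/grubenv create

    That addresses the console error only; if "hd0 out of disk" keeps recurring, the disk itself (cabling, BIOS disk mode, failing sectors) deserves a look before the next reinstall.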

    Read the article

  • Multiple Issues - USB booting Ubuntu 12.04

    - by Pixelishus
    I've been using Ubuntu 12.04 off a bootable USB stick so I can have a portable OS. Also, OS variety. I've been using it for a while and I still have a few issues. First and most annoying, it often lags/freezes. I'm not sure what word you would use for it: windows will often stop working and go dark/not responding for anywhere between 10 seconds and a full minute, and on some occasions even longer. However, the window eventually starts working again. Similarly, sometimes the whole system will freeze, not just one window. The mouse will still move, but nothing will work: no clicking, no menus, no keyboard shortcuts. Again, it will usually start working. I'm liking Ubuntu a lot, but these issues can make it annoying to use sometimes. For example, it will ALWAYS freeze at some point if I try watching a Youtube video, and I'll have to wait for a minute or so until it starts responding again. Aside from the lag/freezing, any time I download packages, it will always say "package operation failed" when it's done, though it does seem to download/install. Another issue I'm having is with shutting down. If I open the logout/shutdown menu and click shutdown, it just logs me out and takes me to the login screen. If I try shutting down through the login screen, it won't do anything – as if I hadn't even gone to it. I've been using the terminal to reboot or shut down when I need to. I've looked around for answers to all of these problems and have yet to find a solution that works. Are these just normal issues with USB booting? I haven't installed Ubuntu on any computers; I've always done USB booting.

    Read the article

  • How can I speed up boot on one of my machines?

    - by Korneel Bouman
    I have a Gateway all-in-one machine (2 GHz Intel Core 2 Duo T7250 dual core processor, 2 gig RAM - full specs) on which I installed 10.10. Once it has booted it's fine, but it takes forever to boot. This is what happens: 1. Boot starts with cursor flashing for about 10-15 seconds 2. Cursor disappears for 1.5 - 2 minutes 3. Cursor reappears, blinks a few seconds more, boot finishes in another 10 seconds 4. Login screen. I have another machine with marginally better specs that boots up in no time (basically the above minus the two minute delay). Things I've done: enabled verbose mode for grub – nothing is shown until after the 2 minute pause; checked syslog – the last message before the pause is a message from alsa saying the process is already running (or something similar... going from memory here...). It could be something sound related, as the built-in speakers are not working (the sound card is recognized though, and headphones work). Anyway, it's not the end of the world, but it's annoying and I'd like to know what's going on... Many thanks, and let me know if more info is needed.
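
    One way to find where the two minutes go is to look for big jumps between consecutive dmesg timestamps. A rough one-liner sketch (the 5-second threshold is arbitrary):

        dmesg | awk -F'[][]' '{ if ($2 - prev > 5) print "---- gap ----"; prev = $2; print }'

    Whatever message follows a "gap" marker is a good candidate for the component that was stalling.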

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down How I Moved to WP Engine How WP Engine ‘Hooked’ Me Why WP Engine? I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with Bluehost. Bluehost makes setting up a WordPress site very, very easy. And they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. Then last year I paid $107.40 for another year’s service. And when that year expired I paid another $190.80 for an additional two years’ service in advance. Up to that point, I had been getting my money’s worth. And then, just a few weeks ago… My Site Slowed to a Crawl [chart: that spike was from an April Fool’s Day post, I think] Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day, as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message-boards-only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh! To be honest, I wasn’t aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh. I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up); I installed a couple of WP caching plugins; I read every WP optimization blog post I could get my greedy little eyes on. However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result I had, at one point, about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that the more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe fewer plugins were better, and was able to go down to just 20. But still, the site was running like a dog. [chart: CPU throttling makes MySQL wait to run a query] Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?)
    other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved ThatJeffSmith.com. The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog, even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic, as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write. How I Moved to WP Engine I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated and it was time to see the new site! But It Was ‘Broken’ I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I needed to set the CNAME (alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected, the site was up and running in less than a minute.
    Then It Was Only Mostly Broken Many of my plugins weren’t working. Apparently just FTP’ing the wp-content directory up wasn’t the proper way to re-install the plugins. I suspect file permissions or file ownership weren’t proper. Some plug-ins were working, many had their settings wiped to the defaults, and a few just didn’t work again. I had to delete the directory of each plug-in manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plug-in settings. Thankfully I was able to use the Wayback Machine to reverse-engineer some things, and of course most plug-ins aren’t that complicated to set up to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story. How WP Engine ‘Hooked’ Me I actually signed up with another provider first. They ranked highly in Google searches and a few tweeps recommended them to me. But when, hours after signing up, I still didn’t have a server ready, I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent. Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident in their 60-day money-back guarantee. And if I understand it correctly, they don’t even take your money until after that 60-day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt! [screenshot: how awesome is this e-mail, from a customer perspective?] I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty. But Jeff, You Skipped a Piece Here – Why WP Engine? I found them on one of those ‘Top 10′ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like ‘INSANELY FAST.
    INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers, and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is in the very low single digits for my site, I decided that I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.
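
    For anyone following along, the manual backup described above boils down to a few commands. A sketch (user, host and database names are made up):

        mysqldump -u wpuser -p wpdb > site-backup.sql      # the database backup
        tar czf wp-content.tgz wp-content/                 # themes, plugins, uploads
        sftp me@new-host                                   # WP Engine is SFTP-only
        # plus Tools > Export on the old WordPress and Tools > Import on the new one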

    Read the article

  • WCF Windows Service TimeOut

    - by rmdussa
    I have a client application developed in .NET that sends a request to a WCF service and expects a response. If the execution time is within 1 minute there is no error; if it exceeds 1 minute I get this error: Inner exception: This request operation sent to net.tcp://localhost:18001/PitToPort/2008/01/30/StockpileService/tcp did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client. How do I increase the timeout, and what is the best solution?
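
    The exception text itself points at the fix. A sketch of both variants – the proxy class name is hypothetical:

        // code-side: raise the operation timeout on the client channel
        var proxy = new StockpileServiceClient();
        ((IContextChannel)proxy.InnerChannel).OperationTimeout = TimeSpan.FromMinutes(10);

    Or declaratively, on the binding the endpoint uses:

        <netTcpBinding>
          <binding name="longRunning" sendTimeout="00:10:00" receiveTimeout="00:10:00" />
        </netTcpBinding>

    If the operation routinely takes minutes, a one-way or callback/polling design is usually a better long-term answer than ever-larger timeouts.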

    Read the article

  • ASP.NET Session does not time out when using the ReportViewer control

    - by Saurabh
    Hi, we are using the ReportViewer control to display SSRS reports in our ASP.NET application. On pages where we use the ReportViewer control, the session does not time out. The reason for this is that the ReportViewer control emits a "setTimeout" javascript function which reads the session timeout value from the web.config, pings the server 1 minute before the configured value, and keeps the session alive. For example, if the session timeout value is 5 minutes, the ReportViewer pings the server on the 4th minute. We used Fiddler to verify this behavior. In addition, if we remove the ReportViewer control from the page, the session times out as expected. We also tried using the ReportViewer control in a sample application and observed the same behaviour. Has anyone faced this issue? Regards, Saurabh

    Read the article

  • Slow SQL Sync with Microsoft Sync Framework on Mobile Client

    - by Malkier
    Hello, we are developing an application which uses the MS Sync Framework to sync data between Windows CE 6.0 clients running SQL CE 3.5 SP1 and a SQL Server 2008 database. Our major problem is a slow sync time of up to 1 minute for 15 tables which are totally empty. Here's a breakdown of our components: Server: SQL Server 2008, 15 tables with change tracking activated, a WCF service with an endpoint for the mobile sync (uses Sync Framework 2.0). Client (mobile): Windows CE 6.0, a .NET application using Sync Framework for Devices (CTP 1) which starts the sync. As I mentioned above, the sync takes up to 1 minute without any changes and with empty tables. The mobile device is in its dock. This is a deal breaker for a production environment. Does anybody have any experience in this field? Is there a way to improve things? Thanks for any responses.

    Read the article

  • getExtra from Intent launched from a pendingIntent

    - by spagi
    Hi. I am trying to set some alarms after the user selects an item with a time from a list, and to create a notification for it at the given time. My problem is that the "showname" extra that I putExtra on my Intent can't be received at the broadcast receiver; it always gets a null value. This is the way I do it for most of my intents, but I think this time, maybe because of the PendingIntent or the BroadcastReceiver, something needs to be done differently. Thank you. The function that sends the Intent through the pending intent: public void setAlarm(String showname,String time) { String[] hourminute=time.split(":"); String hour = hourminute[0]; String minute = hourminute[1]; Calendar rightNow = Calendar.getInstance(); rightNow.set(Calendar.HOUR_OF_DAY, Integer.parseInt(hour)); rightNow.set(Calendar.MINUTE, Integer.parseInt(minute)); rightNow.set(Calendar.SECOND, 0); long t=rightNow.getTimeInMillis(); long t1=System.currentTimeMillis(); try { Intent intent = new Intent(this, alarmreceiver.class); Bundle c = new Bundle(); c.putString("showname", showname);//This is the value I want to pass intent.putExtras(c); PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 12345, intent, 0); AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE); alarmManager.set(AlarmManager.RTC_WAKEUP, rightNow.getTimeInMillis(),pendingIntent); //Log.e("ALARM", "time of millis: "+System.currentTimeMillis()); Toast.makeText(this, "Alarm set", Toast.LENGTH_LONG).show(); } catch (Exception e) { Log.e("ALARM", "ERROR IN CODE:"+e.toString()); } } And this is the receiving end: public class alarmreceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { // Toast.makeText(context, "Alarm worked.", Toast.LENGTH_LONG).show(); Bundle b = intent.getExtras(); String showname=b.getString("showname");//This is where I am supposed to receive it, but it's null NotificationManager manger = (NotificationManager) context .getSystemService(context.NOTIFICATION_SERVICE); Notification notification = new Notification(R.drawable.icon, "TVGuide ?pe???µ?s?", System.currentTimeMillis()); PendingIntent contentIntent = PendingIntent.getActivity(context, 0, new Intent(context, tvguide.class), 0); notification.setLatestEventInfo(context, "?? ?????aµµa ?e????se", showname, contentIntent); notification.flags = Notification.FLAG_ONLY_ALERT_ONCE; notification.sound = Uri.parse("file:///sdcard/dominating.mp3"); notification.vibrate = new long[]{100, 250, 100, 500}; manger.notify(1, notification); } }
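
    The usual culprit here is the PendingIntent cache: getBroadcast() with the same request code hands back the previously created PendingIntent – with its old (or empty) extras – unless a flag tells it to refresh them. A one-line sketch of the common fix:

        PendingIntent pendingIntent = PendingIntent.getBroadcast(
                this, 12345, intent, PendingIntent.FLAG_UPDATE_CURRENT);

    Relatedly, because the request code is hard-coded to 12345, every alarm shares one PendingIntent; using a distinct request code per show keeps separate alarms from clobbering each other's extras.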

    Read the article

  • Filtering a MySQL query result according to a timestamp interval

    - by celalo
    Let's say I have a very large MySQL table with a timestamp field. I want to filter out some of the results so as not to have too many rows, because I am going to print them. The timestamps increase as the number of rows increases, and there is roughly one row every minute on average (not necessarily exactly one every minute, e.g.: 2010-06-07 03:55:14, 2010-06-07 03:56:23, 2010-06-07 03:57:01, 2010-06-07 03:57:51, 2010-06-07 03:59:21 ...). As I mentioned earlier, I want to filter out some of the records; I do not have a specific rule for doing that, but I was thinking of filtering the rows according to the timestamp interval. After filtering, I want a result set which has a certain number of minutes between timestamps on average (e.g.: 2010-06-07 03:20:14, 2010-06-07 03:29:23, 2010-06-07 03:38:01, 2010-06-07 03:49:51, 2010-06-07 03:59:21 ...). Last but not least, the operation should not take an incredible amount of time; I need this functionality to be almost as fast as a normal select operation. Do you have any suggestions?
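
    If the table has an auto-increment id column (an assumption, as are the table and column names), one cheap approach is to keep a single row per fixed time bucket – one pass over the data, and the bucket size directly controls the average spacing:

        -- keep the earliest row of every 10-minute window
        SELECT t.*
        FROM readings t
        JOIN (SELECT MIN(id) AS id
              FROM readings
              GROUP BY FLOOR(UNIX_TIMESTAMP(ts) / 600)) pick ON pick.id = t.id;

    Changing 600 to 900 gives 15-minute spacing, and so on; an index on the timestamp column keeps the grouping fast.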

    Read the article

  • How to split a string into parts in vb.net

    - by amol kadam
    Hi, I'm Amol Kadam. I want to know how to split a string into two parts. My string is in time format (12:12) and I want to separate it into hour and minute parts. The datatype for all variables is string: for the hour variable I used strTimeHr and for the minute strTimeMin. I tried the code below, but there was an exception: "Index and length must refer to a location within the string. Parameter name: length". If Not (objDS.Tables(0).Rows(0)("TimeOfAccident") Is Nothing Or objDS.Tables(0).Rows(0)("TimeOfAccident") Is System.DBNull.Value) Then strTime = objDS.Tables(0).Rows(0)("TimeOfAccident") 'strTime taking value 12:12 index = strTime.IndexOf(":") 'index taking value 2 lastIndex = strTime.Length 'Lastindex taking value 5 strTimeHr = strTime.Substring(0, index) 'strTime taking value 12 correctly strTimeMin = strTime.Substring(index + 1, lastIndex) 'BUT HERE IS PROBLEM OCCURE strTimeMin Doesn't taking any value Me.NumUpDwHr.Text = strTimeHr Me.NumUpDwMin.Text = strTimeMin End If
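
    The exception comes from the second Substring call: Substring(start, length) throws when start + length runs past the end of the string ("12:12" has length 5, and the call asks for 5 characters starting at index 3). A sketch of two fixes:

        ' drop the length argument to take everything after the colon
        strTimeMin = strTime.Substring(index + 1)

        ' or skip the index arithmetic entirely
        Dim parts() As String = strTime.Split(":"c)
        strTimeHr = parts(0)
        strTimeMin = parts(1)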

    Read the article

  • write error: Broken pipe

    - by Fahim
    Hi, I have to run a tool on around 300 directories. Each run takes around 1 minute to 30 minutes, or even more than that. So I wrote a Python script with a loop to run the tool on all directories, one after another. My Python script has code something like: for directory in directories: os.popen('runtool_exec ' + directory) But when I run the Python script I get the following error messages repeatedly: .. tail: write error: Broken pipe date: write error: Broken pipe .. All I do is log in on a remote server using ssh, where the tool, the Python script, and the subject directories are kept. When I run the tool individually from the command prompt, with a command like: runtool_exec directory it works fine. The "broken pipe" errors come only when I run via the Python script. Any idea, workaround? Please suggest. Thanks. Fahim
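
    os.popen opens a pipe for the tool's output that the script never reads, so once the pipe's buffer fills (or the pipe object is garbage-collected) anything the tool writes – apparently the tail and date it runs internally – gets a broken pipe. A sketch of the usual fix, which also waits for each run to finish:

        import subprocess

        for directory in directories:
            # inherits stdout/stderr from the script, blocks until runtool_exec exits
            subprocess.call(['runtool_exec', directory])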

    Read the article

  • T-SQL - Date rounding and normalization

    - by arun prakash
    Hi: I have a stored procedure that rounds a datetime column (yyyy-mm-dd hh:mm:ss) to the nearest 10-minute boundary (yyyy-mm-dd hh:mm): 20100303 09:46:30 ------ 20100303 09:50. But I want to change it to round to the nearest 15-minute boundary: 20100303 09:46:30 ------ 20100303 09:45. Here is my code: IF OBJECT_ID(N'[dbo].[SPNormalizeAddWhen]') IS NOT NULL DROP PROCEDURE [dbo].[SPNormalizeAddWhen] GO CREATE PROCEDURE [dbo].[SPNormalizeAddWhen] As declare @colname nvarchar(20) set @colname='Normalized Add_When' if not exists (select * from syscolumns where id=object_id('Risk') and name=@colname) exec('alter table Risk add [' + @colname + '] datetime') declare @sql nvarchar(500) set @sql='update Risk set [' + @colname + ']=cast(DATEPART(yyyy,[add when]) as nvarchar(4)) + ''-'' + cast(DATEPART(mm,[add when]) as nvarchar(2)) + ''-'' + cast(DATEPART(dd,[add when]) as nvarchar(2)) + '' '' + cast(DATEPART(Hh,[add when]) as nvarchar(2)) + '':'' + cast(round(DATEPART(Mi,[add when]),-1) as nvarchar(2)) ' print @sql exec(@sql) GO
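
    Rather than assembling a string, date arithmetic can do the rounding directly, and the result stays a datetime. A sketch of the nearest-15-minutes version (SQL Server 2005 compatible; the +7 turns T-SQL's integer division into round-to-nearest):

        UPDATE Risk
        SET [Normalized Add_When] =
            DATEADD(minute, ((DATEDIFF(minute, 0, [add when]) + 7) / 15) * 15, 0);

    So 09:46:30 truncates to minute 46, and (46 + 7) / 15 * 15 = 45, i.e. 09:45; minute 53 would round up to 10:00.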

    Read the article

  • How do I make a "simple" throughput servlet filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: the number of requests per minute, and the average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes: public void doFilter(ServletRequest request, ...() { long start = System.currentTimeMillis(); chain.doFilter(request, response); long stop = System.currentTimeMillis(); String time = Util.getTimeDifferenceInSec(start, stop); } This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database – just a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
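
    A pair of atomic counters is usually enough: the filter increments them, and whatever serves the chart snapshots and resets them once a minute. A minimal sketch (class and method names are made up):

        import java.util.concurrent.atomic.AtomicLong;

        public class ThroughputStats {
            private final AtomicLong count = new AtomicLong();
            private final AtomicLong totalMillis = new AtomicLong();

            public void record(long elapsedMillis) {     // called from doFilter
                count.incrementAndGet();
                totalMillis.addAndGet(elapsedMillis);
            }

            /** Requests and average ms since the last call; resets the window. */
            public long[] snapshotAndReset() {
                long n = count.getAndSet(0);
                long t = totalMillis.getAndSet(0);
                return new long[] { n, n == 0 ? 0 : t / n };
            }
        }

    record(stop - start) goes where the Util call is now, and the chart endpoint calls snapshotAndReset() once a minute; contention-wise this is two atomic increments per request.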

    Read the article

  • How to sound an audible bell from crontab

    - by user1526251
    The command line /bin/echo -e "\007" in bash will ring the bell. With the line /bin/echo -e "\007" in my crontab, I expected the bell to ring every minute, but it's silent. I know crontab is working, because the line /bin/touch $HOME/jkjkjk updates the file jkjkjk every minute, as it should. I found a posting from some years ago suggesting that standard output should be redirected to /dev/tty1 in crontab. But the line /bin/echo "\007" /dev/tty1 still fails. What should I try next?
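
    Two things usually bite here: a cron job has no terminal for the BEL to reach, so it must be redirected to a console device explicitly (the last attempt above passes /dev/tty1 as an argument rather than redirecting with >), and the -e flag is needed for \007 to become a BEL. A sketch of a crontab entry – the tty device is an assumption, and the crontab's user needs write access to it:

        * * * * * /bin/echo -e '\007' > /dev/tty1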

    Read the article

  • T-Sql SPROC - Parse C# Datatable to XML in Database (SQL Server 2005)

    - by Goober
    Scenario: I've got an application written in C# that needs to dump some data to a database every minute. Because it's not me that wrote the spec, I have been told to store the data as XML in the SQL Server database and NOT TO USE the "bulk upload" feature. Essentially I just want a single stored procedure that takes XML (which I would produce from my DataTable) and inserts it into the database... and to call this every minute. Current situation: I've heard about the use of "sp_xml_preparedocument", but I'm struggling to understand most of the examples that I've seen (my XML knowledge is far narrower than my C# ability). Question: I would really appreciate someone either pointing me in the direction of a worthwhile tutorial or helping explain things. EDIT - Using SQL Server 2005.
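
    For what it's worth, a sketch of the sp_xml_preparedocument / OPENXML pattern – the table, element and column names are placeholders for whatever the DataTable serializes to:

        CREATE PROCEDURE dbo.InsertReadings @xml NVARCHAR(MAX)
        AS
        BEGIN
            DECLARE @doc INT;
            EXEC sp_xml_preparedocument @doc OUTPUT, @xml;

            INSERT INTO dbo.Readings (SensorId, Value, ReadAt)
            SELECT SensorId, Value, ReadAt
            FROM OPENXML(@doc, '/Readings/Reading', 2)   -- 2 = element-centric mapping
                 WITH (SensorId INT, Value FLOAT, ReadAt DATETIME);

            EXEC sp_xml_removedocument @doc;  -- always free the parsed document
        END

    On 2005 the typed xml datatype with its nodes()/value() methods is the newer alternative to OPENXML, and avoids the prepare/remove bookkeeping.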

    Read the article

  • What do I use when a cron job isn't enough? (php)

    - by mike
    I'm trying to figure out the most efficient way to run a pretty hefty PHP task thousands of times a day. It needs to make an IMAP connection to Gmail, loop over the emails, save this info to the database, and save images locally. Running this task every so often using a cron isn't that big of a deal, but I need to run it every minute, and I know eventually the crons will start running on top of each other and cause memory issues. What is the next step up when you need to run a task efficiently multiple times a minute? I've been reading about beanstalkd & pheanstalk, and I'm not entirely sure if they will do what I need. Thoughts???
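
    Before reaching for a queue, a single long-running worker with a lock is often enough for "every minute, but never overlapping". A sketch (the lock path and job function are made up):

        <?php
        $lock = fopen('/tmp/imap-worker.lock', 'c');
        if (!flock($lock, LOCK_EX | LOCK_NB)) {
            exit; // another worker is already running
        }
        while (true) {
            $started = time();
            process_mailbox();  // hypothetical: the IMAP fetch, DB save, image download
            sleep(max(0, 60 - (time() - $started)));  // one pass per minute, no pile-up
        }

    beanstalkd/pheanstalk earn their keep when one worker per minute stops being enough and you want several consumers pulling jobs in parallel.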

    Read the article
