Search Results

Search found 7651 results on 307 pages for 'execution plan'.

  • How do I explain this to potential employers?

    - by ReferencelessBob
    Backstory: TL;DR: I gained a lot of experience working for 5 years at one startup company, but it eventually failed. The company is gone and the owner MIA. When I left sixth-form college I didn't want to start a degree straight away, so when I met this guy who knew a guy who was setting up a publishing company and needed a 'Techie', I thought why not. It was a very small operation: he sent mailings to schools, waited for orders to start arriving, ordered a short run of the textbooks to be printed, stuck them in envelopes and posted them out. I was initially going to help him set up a computerized system for recording orders and payments and printing labels, really basic stuff, and I threw it together in Access in a couple of weeks. He also wanted to start taking orders online, so I set up a website and a PayPal business account. While I was doing this, I was also helping with the day-to-day running of things: taking phone orders, posting products, banking cheques, ordering textbooks, designing mailings, filing end-of-year accounts, hiring extra staff, putting stamps on envelopes. I learned so much about things I didn't even know I needed to learn about. Things were pretty good: when I started we sold about £10,000 worth of textbooks, and by my 4th year there we sold £250,000 worth. Things were looking good, but we had a problem. Our best-selling product had peaked and sales started to fall sharply. We introduced add-on products through the website to boost sales, which helped for a while, but we had simply saturated the market. Our plan was to enter the US with our star product and follow the same, slightly modified, plan as before. We set up a 1-866 number and had the calls forwarded to our UK offices. We contracted a fulfillment company, shipped over a few thousand textbooks, had a mailing printed and mailed, then sat by the phones and waited. Needless to say, it didn't work. We tried a few other things, at home and in the US, but nothing helped. We had expanded in the good times, moving into bigger offices and taking on staff to do administrative and dispatch work, but now cashflow was becoming a problem and things got tougher. We did the only thing we could and scaled things right back: the offices went, the admin staff went, I stopped taking a wage and started working from home. Nothing helped. The business was wound up about 2 years ago. In the end it turned out that the owner had built up considerable debt at the start of the business and had not paid it off during the good years, which left him in a difficult position when cashflow started to dry up. I haven't been able to contact the owner since I found out. It took me a while to get back on my feet after that, but I'm now at university doing a Computer Science degree. How do I show the experience I have without getting into all the gory details of what happened?

  • Help with running crontab from root

    - by user242065
    I'm using OS X and having trouble getting a cron job to run. I type the following: $ sudo -i $ crontab -e I then enter: * * * * * root ifconfig en0 down > /dev/null 0 19 * * * root ifconfig en0 down > /dev/null 0 7 * * * root ifconfig en0 up > /dev/null No success; the first line is just for testing. I want it to shut off my internet, and I plan to leave the next two lines in once I get this working. If I type ifconfig en0 down into the terminal, the internet goes off. Why is my cron job not shutting down the internet? FYI: this is a follow-up question from http://stackoverflow.com/questions/3027362/how-can-i-write-a-cron-job-that-will-block-my-internet-from-7pm-to-7am-so-i-can; most of the comments there are people making fun of me, plus a few attempts to solve the problem without cron jobs.
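
    A likely culprit, offered as an assumption since the excerpt carries no answers: crontab -e edits a per-user table, which has no username column; that column exists only in /etc/crontab. So "root" here is parsed as the command to run, and nothing happens. Cron's PATH is also minimal, so the full path to ifconfig is safer. A sketch of the corrected per-user entries:

      # per-user crontab (edited via "sudo crontab -e"): no username field
      0 19 * * * /sbin/ifconfig en0 down > /dev/null 2>&1
      0 7  * * * /sbin/ifconfig en0 up   > /dev/null 2>&1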

  • Android tethering question

    - by Solignis
    Hi there, I have a Motorola Droid 1 with Android 2.2 on it. I found something odd, and I don't understand why it works. If I plug my phone into my home computer, which is Windows 7, and turn on the tethering ability, I am met with a Verizon captive portal telling me I need to pay for a tethering plan (which I don't have). Now the weird thing: if I plug the same phone into my Ubuntu 10.10 laptop and enable tethering, it works and can get on the internet with no captive portal showing up. The only difference I noticed between the connections was that Windows connected to the phone with an NDIS driver, while Linux connected with what I think is a raw device mapping. Would that have anything to do with it?

  • HP DL180 G6 P410 8x SATA 1TB, what is the optimal configuration?

    - by Oneiroi
    I have a HP DL180 G6 with a P410 RAID controller. Presently this runs using 4x 1TB Samsung Spinpoint SATA drives in a RAID 10 configuration with default settings. I am about to add a backplane to increase the drive capacity from 4 to 12 drives, and I plan to install 4 more 1TB SATA drives. The drives are matched and have close serial numbers (they arrived together on the manufacturer's pallet). Model HD103UJ 1000GB/7200rpm/32M, rated for 3Gb/s. I will also be installing RHEL 6.1 x86_64. My question is: what would be the optimal RAID settings (stripe size, etc.) for this configuration? To recap: 8x Model HD103UJ 1000GB/7200rpm/32M, rated for 3Gb/s, in a RAID 10 configuration. Thanks in advance. Update on the server's role: it is to become an iSCSI target (Glance) for an internal OpenStack deployment currently underway, and will also provide virtualisation through KVM.
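
    For reference, the strip size is set when the logical drive is created; a sketch with hpacucli, where the slot number is a placeholder and the 256 KB figure is my assumption of a reasonable starting point for a mixed KVM/iSCSI workload, not a measured optimum:

      # RAID 1+0 across the 8 unassigned drives, 256 KB strip size
      hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0 ss=256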

  • Why do I get "Sequence contains no elements"?

    - by Gary McGill
    NOTE: see edits at bottom. I am an idiot. I had the following code to process a set of tag names and identify/process new ones: IEnumerable<string> tagNames = GetTagNames(); List<Tag> allTags = GetAllTags(); var newTagNames = tagNames.Where(n => !allTags.Any(t => t.Name == n)); foreach (var tagName in newTagNames) { // ... } ...and this worked fine, except that it failed to deal with cases where there's a tag called "Foo" and the list contains "foo". In other words, it wasn't doing a case-insensitive comparison. I changed the test to use a case-insensitive comparison, as follows: var newTagNames = tagNames.Where(n => !allTags.Any(t => t.Name.Equals(n, StringComparison.InvariantCultureIgnoreCase))); ...and suddenly I get an exception thrown when the foreach runs (and calls MoveNext on) newTagNames. The exception says: Sequence contains no elements. I'm confused by this. Why would foreach insist on the sequence being non-empty? I'd expect to see that error if I was calling First(), but not when using foreach. EDIT: more info. This is getting weirder by the minute. Because my code is in an async method, and I'm superstitious, I decided that there was too much "distance" between the point at which the exception is raised and the point at which it's caught and reported. So, I put a try/catch around the offending code, in the hope of verifying that the exception being thrown really was what I thought it was. So now I can step through in the debugger to the foreach line, I can verify that the sequence is empty, and I can step right up to the bit where the debugger highlights the word "in". One more step, and I'm in my exception handler. But not the exception handler I just added, no! It lands in my outermost exception handler, without visiting my recently-added one! It doesn't match catch (Exception ex), and nor does it match a plain catch. (I did also put in a finally, and verified that it does visit that on the way out.) I've always taken it on faith that an exception handler such as those would catch any exception. I'm scared now. I need an adult. EDIT 2: OK, so, um, false alarm... The exception was not being caught by my local try/catch simply because it was not being raised by the code I thought. As I said above, I watched the execution in the debugger jump from the "in" of the foreach straight to the outer exception handler, hence my (wrong) assumption that that was where the error lay. However, with the empty enumeration, that was simply the last statement executed within the function, and for some reason the debugger did not show me the step out of the function or the execution of the next statement at the point of call, which was in fact the one causing the error. Apologies to all those who responded, and if you would like to create an answer saying that I am an idiot, I will gladly accept it. That is, if I ever show my face on SO again...
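
    For anyone who lands here via the same message: foreach over an empty sequence is harmless; it is First(), Single() and their unguarded friends that throw. A minimal sketch, plain LINQ and nothing specific to the poster's code:

      var empty = Enumerable.Empty<string>();   // requires using System.Linq;
      foreach (var s in empty) { }              // fine: the loop body simply never runs
      var first = empty.First();                // throws InvalidOperationException:
                                                //   "Sequence contains no elements"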

  • How can I download django-1.2 and use it across multiple sites when the system default is 1.1?

    - by meder
    I'm on Debian Lenny, and the latest backports django is 1.1.1 final. I don't want to use sid, so I probably have to download django. I have my sites located at /www/, and I plan on using mod_wsgi with Apache2, reverse-proxied from nginx. Now that I have downloaded pip, and virtualenv through pip, can someone explain how I could get my /www/ sites (which are yet to be made) all to use django-1.2? Question 1.1: Where do you suggest I download django-1.2 to? I know you can store it anywhere, but where would you store it? Question 1.2: After installing it, how do you actually tie that django-1.2, instead of the system default 1.1, to the reverse-proxied Apache conf? I would prefer it if answers were more specific than vague and had examples of setups.
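
    A sketch of the virtualenv route, with the caveat that the paths and the Python version are my assumptions (Lenny ships Python 2.5):

      virtualenv /www/env
      /www/env/bin/pip install Django==1.2

      # then, in each site's .wsgi file, before anything imports django:
      import site
      site.addsitedir('/www/env/lib/python2.5/site-packages')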

  • Shared SQL Server 2008

    - by nazaf
    Hi, I have a Windows Hyper-V VPS plan with 1024 MB of RAM. After installing SQL Server 2008 Express, my memory usage went up to 75% before even running my site. I know that SQL Server consumes a lot of memory, so I am considering hosting my DB on a shared server instead. Which of the following is more scalable: installing my DB on my VPS, or on a shared server? If the latter, can you recommend a good shared server? Thanks.
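
    One knob worth knowing before giving up on the VPS: SQL Server grabs memory greedily by default, but the instance can be capped. A sketch, where the 256 MB figure is just an example to tune from and whether the capped instance then performs acceptably is a separate question:

      -- limit the instance's memory appetite (value in MB)
      EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 256; RECONFIGURE;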

  • Converting AVI to FLV in highest quality

    - by Josh
    I want to convert some of the AVI files I recorded (presentations) to FLV so I can host them on a website and offer them to my visitors. I have done this sort of thing before, but the quality the files came out at would not be good enough for what I plan on doing here. So does anyone have a link to a guide, or any experience they can offer, in converting AVI to FLV with minimal quality loss? A lot of my presentations are 720p as well, so I'd want to keep the aspect ratio the same as the source video. Thanks in advance.
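
    For reference, ffmpeg handles this conversion; a sketch, where the bitrate is a starting point to tune by eye and the option spelling matches ffmpeg builds of this era (newer builds write the video bitrate as -b:v):

      # keep the 720p frame size; FLV audio must be 44100, 22050 or 11025 Hz
      ffmpeg -i presentation.avi -s 1280x720 -b 2000k -ar 44100 presentation.flv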

  • Considering building a new system, but undecided about chassis

    - by J.C. Bengtson
    I'm considering a new system build, and was thinking that this particular motherboard has the features I need and like: http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=667651&pagenumber=1&RSort=1&csid=ITD&recordsPerPage=5&body=REVIEWS#CustomerReviewsBlock ...but I am unsure which model chassis to pair it with. I'd strongly prefer something from Cooler Master, as I'm a fan of their products, but am having a hard time deciding, and also don't want to get into some odd situation where the board doesn't properly fit. I plan on having two optical drives (5.25"), two internal HDs (3.5"), and will likely go with an SLI setup of 2 or possibly even 3 cards, so I'd need a chassis roomy enough to accommodate all of that, as well as the motherboard itself. Based on the stock available at that same site, do you all have any suggestions? The larger, the better, as I hate having components crammed together. Your help is most appreciated!

  • Exposing Exchange 2010 OWA via Cisco ASA 5520

    - by Gir
    Does anyone have experience with exposing the web access (OWA) of an Exchange 2010 server through a Cisco ASA (my goal is something like a DMZ)? If so, could you give me some advice? I know that Exchange doesn't support a DMZ and that MS recommends using TMG. Still, I'd like to know if someone has managed this (I've tried and wasn't very successful so far). Or would it be better to ditch (read: sell) the ASA and use a TMG server instead? We're almost entirely on Windows Server 2008 R2, with some remaining 2003 servers running mostly as file servers. We don't use the VPN features much at the moment but plan on doing so in the future; still, OWA should be there for when VPN is not possible from outside. Thank you very much!
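
    For what it's worth, publishing just TCP 443 to the Client Access server is a plain static NAT plus an ACL on the ASA; a sketch in pre-8.3 syntax, where every address, interface name and the software version are placeholders/assumptions:

      static (inside,outside) 203.0.113.10 192.168.1.10 netmask 255.255.255.255
      access-list outside_in extended permit tcp any host 203.0.113.10 eq https
      access-group outside_in in interface outside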

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the Subscriber and the Publisher to Full, or can I leave the Subscriber in Simple and have the Publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks
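
    To the best of my knowledge (worth verifying against your replication type): the recovery models of Publisher and Subscriber are independent, and the switch is an online ALTER, so no outage is needed; the log chain only begins once a full backup is taken afterwards. A sketch with a placeholder database name:

      ALTER DATABASE PubDB SET RECOVERY FULL;                    -- online operation
      BACKUP DATABASE PubDB TO DISK = 'D:\Backup\PubDB.bak';     -- starts the log chain
      BACKUP LOG PubDB TO DISK = 'D:\Backup\PubDB_log.trn';      -- then schedule these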

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it belongs. I have an idea myself, but I don't know if it's the only method and, more importantly, the right method. My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe more; I don't even know, because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live. In effect, their work with their clients will run parallel with my work for them. As I complete my own work on a separate test server, I'll push each following stage of the process onto the live server. Scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post. In this system, we have sets of features which create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates, such as workflow changes, custom actions, custom forms, and similar ilk, don't require deactivating and reactivating these features. But there are some parts which do. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, it's absolutely unacceptable to do this. So when I need to implement a new change in functionality, I need to be able to store the currently present data in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoisted myself by my own petard with the feature system I implemented. Unfortunately, the necessity of later making several of these "project sites" meant I had to write a lot of my code with the concept of "can be deployed repeatedly" in mind. My current plan is to run through the lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that ensures I don't re-update them later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. This roughly summarizes my current plan.
    To my advantage, there are no long-running workflows in the system that will be affected by this, so there's nothing I have to worry about "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Is there anyone who can think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!
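
    A minimal sketch of the export half of that plan in the server object model, with the caveat that the site URL, list name and dump path are hypothetical, and real code would also walk item.File.Versions plus the lookup columns:

      // using Microsoft.SharePoint; using System.IO;
      using (SPSite site = new SPSite("http://server/sites/project"))
      using (SPWeb web = site.OpenWeb())
      {
          SPList list = web.Lists["Correspondence"];        // hypothetical list name
          foreach (SPListItem item in list.Items)
          {
              // key the dump by the old ID so cross-list lookups can be remapped on re-import
              File.WriteAllBytes(Path.Combine(@"C:\dump", item.ID + ".dat"),
                                 item.File.OpenBinary());   // document libraries only
          }
      }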

  • Adding FK Index to existing table in Merge Replication Topology

    - by Refracted Paladin
    I have a table that has grown quite large, and that we replicate to about 120 subscribers. An FK on that table does not have an index, and when I ran an execution plan on a query that was causing issues, it had this to say -- /* Missing Index Details from CaseNotesTimeoutQuerys.sql - mylocal\sqlexpress.MATRIX (WWCARES\pschaller (54)) The Query Processor estimates that implementing the following index could improve the query cost by 99.5556%. */ /* USE [MATRIX] GO CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>] ON [dbo].[tblCaseNotes] ([PersonID]) GO */ I would like to add this, but I am afraid it will FORCE a reinitialization. Can anyone verify or validate my concerns? Does it even work that way, or would I need to run the script on each subscriber? Any insight would be appreciated.
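
    To the best of my knowledge (verify in a test topology first): a plain CREATE INDEX is not a replicated schema change in merge replication, so it neither propagates to subscribers nor forces a reinitialization; it would have to be run at the publisher and at each subscriber, or pushed out with sp_addscriptexec. Filling in the name the tuning hint left blank:

      CREATE NONCLUSTERED INDEX IX_tblCaseNotes_PersonID
          ON dbo.tblCaseNotes (PersonID);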

  • Eclipse (STS), Maven and maven-minify-plugin, can they work together?

    - by CodeReaper
    Hi, I am working on a project where I am in charge of HTML, CSS and JavaScript. I found this maven-minify-plugin, which seemed to be just what I wanted. Everything is good when I deploy using Maven on the server, but when I use Eclipse (STS, www.springsource.com/products/sts) to run the project on localhost, no CSS or JS file is generated by the plugin. Does anyone have experience with this Maven plugin, so they can tell me whether it should be possible to run it on localhost? Does anyone know of another plugin I can use to (combine and) minify JavaScript and CSS files when running on localhost in Eclipse and also when deploying using Maven? Any help appreciated... ----extra information---- I basically just copied in what it said on the plugin webpage, so I have these bits in my pom.xml: .... <build> <plugins> .... <plugin> <groupId>com.samaxes.maven</groupId> <artifactId>maven-minify-plugin</artifactId> <version>1.1</version> <executions> <execution> <id>default-minify</id> <phase>process-resources</phase> <configuration> <cssFiles> .... <param>forms.css</param> <param>jquery.droppy.css</param> <param>jquery.jgrowl.css</param> </cssFiles> <jsFiles> .... <param>jquery.droppy.js</param> <param>jquery.jgrowl.js</param> </jsFiles> <jsFinalFile>script.js</jsFinalFile> <linebreak>-1</linebreak> <nomunge>false</nomunge> <verbose>false</verbose> <preserveAllSemiColons>false</preserveAllSemiColons> <disableOptimizations>false</disableOptimizations> <bufferSize>4096</bufferSize> </configuration> <goals> <goal>minify</goal> </goals> </execution> </executions> </plugin> </plugins> </build> .... Should/Can I bind the plugin to a different phase? I just use mvn clean package and move the snapshot into Tomcat to deploy on the server. I am unsure how to explain how I run the webapp on localhost, but here goes: I have a vanilla Tomcat that I defined as a server in Eclipse, and then specified that the webapp should always build on that "server".
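
    A guess at the cause (hedged; I can't see the STS setup from here): publishing to a Tomcat server defined inside Eclipse copies the workspace build output and skips the Maven lifecycle, so a goal bound to process-resources never fires. Running the phase, or the goal directly, from a terminal before publishing is the low-tech workaround:

      mvn process-resources
      # or invoke the bound goal explicitly:
      mvn com.samaxes.maven:maven-minify-plugin:1.1:minify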

  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command: TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'` TM_LOCAL has a value like "2012-05-16 23:18" after this line executes. I'd also like to check PIPESTATUS to see if there was an error; for example, if the file does not exist, ls returns 2. But $? is 0, as it holds the return value of awk. If I run the command alone, I can check the return value of ls by looking at ${PIPESTATUS[0]}: ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }' But $PIPESTATUS does not work as I expected if I assign the output to a variable, as in the first example. In that case, the $PIPESTATUS array has only one element, which is the same as $?. So, the question is: how can I get both $PIPESTATUS and the output assigned to a variable at the same time?
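
    A sketch of the usual workaround: the pipeline inside the command substitution runs in a subshell, so its PIPESTATUS never reaches the parent; pipefail, however, folds a failing ls into the pipeline's own exit status (assuming bash, and that awk itself won't fail):

      set -o pipefail
      TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }')
      rc=$?   # non-zero here means ls (or awk) failed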

  • Need advice on choosing AWS EC2

    - by Mayank
    I'm planning to host a website that, in the first phase, would target 30,000 users. It is in PHP and runs on Apache. I'm assuming 8,000 users can be online in the worst-case scenario, with 1,000 of them uploading photographs. A photograph will be resized to around 1MB client-side, and one HTTP request uploads only one photograph. My plan: 2 Small EC2 instances to run Apache httpd; 2 Small EC2 instances for the DB (PostgreSQL), one to write data and the other as its read replica; EBS volumes for the DBs; and finally, Amazon S3 for the uploaded photographs. My questions: Is a Small EC2 instance more than what I require; should I go for Micro instead? Is 8,000 simultaneous users the right number to size against for a new website, or should I go for Small instances to make the site capable of handling spikes?

  • Episode-by-episode DVD rip to AVI

    - by ProfKaos
    I'm looking for a tool to rip DVD files to AVI files. VidCode looks cool, but it wants to convert the whole DVD (all files in the VIDEO_TS folder) to one AVI. I would like to pull each episode on the DVD into its own AVI. Is there a way to do that with VidCode, or with another ripper? My plan B for VidCode is to manually separate the files for each episode into their own VIDEO_TS folders. Is this possible? Is there an easy, lazy way to automate this?

  • Installing software into a .wim image

    - by lross1309
    I have been looking all day into how to install software into a .wim image. I have a .wim image of a laptop that will be deployed to 20 more. The image is pretty much good to go, except that I need to add a couple more drivers and a couple more applications. Is there a way to do this in the answer file? Or do I have to deploy the image, install the drivers/applications, sysprep, and recapture the image? Any information relating to this will be greatly appreciated. Leon P.S. I have only been using answer files and ImageX, and would like to stick with this for the time being. I plan on using MDT and WDS when I have some time to learn about them.
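
    Drivers, at least, can go straight into the mounted image offline with DISM (paths and the image index below are placeholders); applications generally cannot be injected offline, so for those it is either the deploy, install, sysprep and recapture loop, or install commands run from the answer file (e.g. FirstLogonCommands):

      dism /Mount-Wim /WimFile:C:\images\laptop.wim /Index:1 /MountDir:C:\mount
      dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
      dism /Unmount-Wim /MountDir:C:\mount /Commit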

  • How to create a Linux Media Server using Ubuntu?

    - by Thomas
    Hello all: I'm an intermediate Linux user and a relative beginner to servers. I would like some help finding resources on setting up a basic server. I have Googled, and am a member of the Ubuntu Forums, but figure it can't hurt to ask the Stack Overflow community for help as well. I plan on installing on an old laptop (Lenovo Thinkpad R61i or Toshiba Satellite A105). I have downloaded the latest Ubuntu (9.10) but don't know how to do any of the configuration. I just want a server to store my files, where I can access them (download and/or stream) from a browser. Any help you can give is greatly appreciated. Thanks! Thomas
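
    The lightest-weight version of "files in a browser" is plain Apache with directory listings; a sketch, where the media path is a placeholder and the assumption is that the default Ubuntu Apache config still allows directory indexes:

      sudo apt-get install apache2
      sudo ln -s /home/thomas/media /var/www/media
      # then browse to http://<server-ip>/media to list, download or stream files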

  • Dump output from REPL

    - by Ankit Soni
    I'm writing SML programs, and I'd like a way to quickly see the output from running a program in the REPL without actually running the REPL interactively (to quickly see if a program has syntax errors; I plan to use this as a make program for .sml files in vim, to view the output inside vim). Currently, I have this: sml file.sml | echo -e "\004" So it runs the program and then echoes Ctrl-D to exit the REPL. The problem is that it's too quick to send the Ctrl-D key, so there is no output. I tried this too: sml file.sml | sleep 2 ; echo -e "\004" But that isn't doing it either. Any ideas on how I can get a dump of the output from the REPL?
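
    A guess at the actual issue (hedged, since the excerpt carries no answers): | echo sends sml's output into echo, and echo ignores its stdin, so nothing is shown and nothing is ever delivered to sml's stdin. Feeding EOF into sml's stdin does what's wanted:

      sml file.sml < /dev/null   # the REPL exits as soon as it reads EOF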

  • Spoof user agent for GoGo Inflight Internet?

    - by AndyL
    Is it possible to trick the GoGo Inflight WiFi on airlines into thinking that you have a mobile device instead of a laptop? It seems like most airlines that offer in-flight wireless these days use GoGo, and they offer different pricing for mobile devices and laptops. It seems like they are checking the browser's user agent. Out of curiosity, is it possible to use a Firefox extension like this one to spoof the user agent and allow a laptop to access the internet under a GoGo mobile plan? And how would GoGo handle something like an IMAP email client, such as Thunderbird? Do IMAP clients have a user-agent field as well, one that would normally identify whether the mail client is running on a laptop or a mobile device?
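
    On the browser half of the question: Firefox can do this without an extension via an about:config preference (the UA string below is an example iPhone string of that era; whether GoGo keys off anything besides the User-Agent header, I can't say):

      // about:config -> right-click -> New -> String
      general.useragent.override = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_0 like Mac OS X) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8A293 Safari/6531.22.7"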

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this: if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" } if (shExpMatch(host, "youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" } At the moment, about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains, and I'm guessing it will eventually be a list of 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
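
    Since a PAC file is ordinary JavaScript, one option is to swap the long if-chain for a single object lookup, which costs the same no matter how many domains are listed; a sketch, with the domain list abridged and proxy.domain.tld kept from the question:

      var proxied = { "youtube.com": 1, "www.youtube.com": 1 /* , ... */ };
      function FindProxyForURL(url, host) {
          if (proxied[host] === 1)
              return "PROXY proxy.domain.tld:8080; DIRECT";
          return "DIRECT";
      }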

  • How to rebuild a Li Ion laptop battery?

    - by spoulson
    I have an aging Gateway NX560XL laptop. The battery is toast, and a new one, even aftermarket, starts at $130. So, to experiment, I began tearing apart the old battery to see what could be done. I found it used 8 standard-size 18650 Li-Ion cells, arranged as pairs in parallel and then in series (like: ====). Some online shopping revealed replacements at ~$7-13 each, depending on mAh output. My plan is to load-test to find the bad cells and replace only those, as I have read that typically only 1 or 2 may be bad. I'm proficient with soldering; however, these cells are attached with welded tabs. Some of them broke during disassembly, and I'm not sure how to reattach them. What I found online are cells like these that have solder tabs pre-welded to the ends, so I can solder wires onto them. Is there any guide available that provides the instructions and parts to do this kind of rebuild?

  • Database on SSD: data only, or the DBMS too?

    - by simone
    I plan on moving the data I use for statistical analysis (100-ish GB) onto an SSD. The data is either sqlite single-file DBs or postgresql-managed data. The SSD is 240 GB, 550 MB/s read and 520 MB/s write. Should I reserve that space for the data only, or would it be a good idea to install the operating system (Mac OS X) and the application directory (Adobe Suite, Microsoft Office and the like) on the SSD too? And would it make a substantial speed difference if I also installed the postgresql binaries on the SSD? I have plenty of other space (another 300GB hard drive, and a 1TB one). I don't know the specs of the non-SSD drives, though they're our standard equipment on all Macs, and they're definitely OK. Thanks.

  • Is ODBC on Windows 2003 slower than on Windows 7?

    - by nbolton
    I am seeing some MSSQL 2005 performance issues and am trying to diagnose the cause. I am using SQL Profiler to gather query execution times. Both the client (using ODBC) and the SQL server are running on Windows 2003. I am also using a Windows 7 client with a different Windows 2003 server to compare results. Windows 7 client / Windows 2003 server: SQL Management Studio: 393ms; through ODBC: 215ms. Windows 2003 client: SQL Management Studio: approx 155ms; through ODBC: 3145ms. ...in both cases, I'm running SQL Management Studio on the client. To me, these figures suggest there's something wrong with the ODBC client on the Windows 2003 server. On Windows 7, I see that the ODBC "SQL Server" driver is version 6.01.7600.16385, but on Windows 2003 it is 2000.86.3959.00 (by default). Could this be the problem? Is it possible to update an ODBC driver?
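
    One avenue, offered as an assumption rather than a verified fix: the 2000.86.x "SQL Server" driver is the ancient MDAC-era one, while installing SQL Server Native Client (a free download in the SQL Server 2005 Feature Pack) puts a current driver on the 2003 box without touching the built-in one; the DSN or connection string then selects it (server and database names are placeholders):

      Driver={SQL Native Client};Server=myServer;Database=myDB;Trusted_Connection=yes;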
