Search Results

Search found 85480 results on 3420 pages for 'change data capture'.


  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys or learning to rely on a bunch of plugins; I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want this badly enough to do whatever it takes. I regularly view or edit tab-delimited (or comma-delimited, but that would be a bonus) data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: the columns in my BIND zone files are roughly 40+, 6, 2, 5, and 15+ characters wide, so even though the data would otherwise fit on a single screen, once I set ts=40 it no longer does. I have been searching for a "dynamic tab size" solution for years, but no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my workstation and open it in Open Office. There has to be a better way.

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by user55859
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    That's what the top command tells me:

        top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:    532288k total,  469628k used,   62660k free,   28760k buffers
        Swap:  1048568k total,     408k used, 1048160k free,  210060k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        22806 www-data  20   0  178m  67m  31m S    1 13.1  0:05.02 php5-fpm
         8980 mysql     20   0  241m  55m 7384 S    0 10.6  2:42.42 mysqld
        22807 www-data  20   0  162m  43m  22m S    0  8.3  0:04.84 php5-fpm
        22808 www-data  20   0  160m  41m  23m S    0  8.0  0:04.68 php5-fpm
        25102 www-data  20   0  151m  30m  21m S    0  5.9  0:00.80 php5-fpm
        10849 root      20   0 44100 8352 1808 S    0  1.6  0:03.16 munin-node
        22805 root      20   0  145m 4712 1472 S    0  0.9  0:00.16 php5-fpm
        21859 root      20   0 66168 3248 2540 S    1  0.6  0:00.02 sshd
        21863 root      20   0 66028 3188 2548 S    0  0.6  0:00.06 sshd
         3956 www-data  20   0 31756 3052  928 S    0  0.6  0:06.42 nginx
         3954 www-data  20   0 31712 3036  928 S    0  0.6  0:06.74 nginx
         3951 www-data  20   0 31712 3008  928 S    0  0.6  0:06.42 nginx
         3957 www-data  20   0 31688 2992  928 S    0  0.6  0:06.56 nginx
         3950 www-data  20   0 31676 2980  928 S    0  0.6  0:06.72 nginx
         3955 www-data  20   0 31552 2896  928 S    0  0.5  0:06.56 nginx
         3953 www-data  20   0 31552 2888  928 S    0  0.5  0:06.42 nginx
         3952 www-data  20   0 31544 2880  928 S    0  0.5  0:06.60 nginx

    So, the question is: is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
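
    A rough way to sanity-check how many php5-fpm workers fit in that much RAM is simple arithmetic; here is a back-of-the-envelope sketch in Python. The per-process figures are read off the RES column of the top snapshot above and are assumptions about this particular moment, not measurements of a loaded server.

    ```python
    # Back-of-the-envelope php5-fpm pool sizing (figures are assumptions
    # taken from the `top` snapshot above - adjust for your own workload).
    total_ram_kb  = 532288          # Mem: total
    mysqld_kb     = 55 * 1024       # resident size of mysqld
    nginx_kb      = 8 * 3 * 1024    # eight nginx workers at ~3 MB each
    system_kb     = 64 * 1024       # headroom for the OS, sshd, munin, etc.
    per_worker_kb = 60 * 1024       # a busy php5-fpm child is ~40-70 MB here

    available_kb = total_ram_kb - mysqld_kb - nginx_kb - system_kb
    print("RAM left for PHP workers: %.0f MB" % (available_kb / 1024))
    print("Workers that fit comfortably:", available_kb // per_worker_kb)
    ```

    By that estimate the existing pm.max_children = 5 is already near the ceiling of a 512 MB box, so any real savings would have to come from shrinking the per-worker footprint (fewer PHP extensions, a lower memory_limit) rather than from the pool settings.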

    Read the article

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

        adm:x:4:me,ubuntu
        sudo:x:27:me
        www-data:x:33:me,www-data
        ssh:x:108:me
        admin:x:111:me
        ubuntu:x:1000:www-data,me
        me:x:1001:me

    but when I cd /var/www I can't do simple commands without doing sudo. So I chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group?

        drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
        drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
        drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com

    Edit: It appears I had an alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.

    Read the article

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes. I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I would have to upload ten gigs every time I instantiate a backup, even after a simple commit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data over a given number of megabytes. Is it possible to split the Subversion dumps using this, so that further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
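
    Since only the tail of the dump changes between runs, one illustrative approach is to split the dump into fixed-size chunks, hash each one, and upload only the chunks whose hash changed since the last backup. The sketch below is a rough illustration of that idea in Python, not a replacement for duplicity; the bucket name, chunk size, and file names are placeholder assumptions, and it expects AWS credentials to already be configured for boto3.

    ```python
    import hashlib
    import json
    import os

    import boto3  # assumes credentials are configured in the environment

    CHUNK_SIZE = 64 * 1024 * 1024      # 64 MB pieces (placeholder choice)
    BUCKET = "my-svn-backups"          # placeholder bucket name
    MANIFEST = "dump.manifest.json"    # local record of previously uploaded chunks

    def backup(dump_path):
        s3 = boto3.client("s3")
        old = {}
        if os.path.exists(MANIFEST):
            with open(MANIFEST) as f:
                old = json.load(f)
        new = {}
        with open(dump_path, "rb") as dump:
            index = 0
            while chunk := dump.read(CHUNK_SIZE):
                key = "svn/chunk-%06d" % index
                digest = hashlib.sha256(chunk).hexdigest()
                new[key] = digest
                if old.get(key) != digest:      # only changed or new chunks go up
                    s3.put_object(Bucket=BUCKET, Key=key, Body=chunk)
                index += 1
        with open(MANIFEST, "w") as f:
            json.dump(new, f)

    backup("svn-full-dump")
    ```

    Because the unchanged head of the dump hashes to the same values every time, a post-commit backup only pays for the tail chunks - which is essentially what duplicity's --volsize splitting buys you as well.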

    Read the article

  • How may I retrieve data from an Excel table based on a variable number of criteria?

    - by Eshwar
    I have the following salary data, for example:

        Country  State     2012  2013  ...  2027
        =======  ========  ====  ====
        China    Other     1000  1100
        China    Shanghai  1310  1400
        China    Tianjin   1450  1500
        India    Orissa    1500  1600

    So now, in another Excel sheet, I would want an answer to one of the following questions:

    1. What is the salary in Shanghai for 2013? (Answer would be 1400)
    2. What is the salary in Hubei province for 2012? (Since it is not listed, use "Other" - 1000)
    3. What is the average salary in China for 2013? (Answer would be 1450)
    4. What is the highest salary in China for 2012? (Answer is Tianjin)

    So, in the above order of priority, I would like those numbers in another Excel sheet using some form of query. I considered PivotTables, but I was wondering if there is a better, more efficient way of doing this. I imagine SQL is suited to this, but I am not clued up on it, so some native Excel functionality would be much preferred. Suggestions on an appropriate format of data for such queries would also be appreciated.
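
    In Excel itself this kind of lookup-with-fallback is usually built from INDEX/MATCH, or a lookup wrapped in IFERROR that falls back to the country's "Other" row. Purely as an illustration of the logic, here is the same fallback rule as a short Python sketch using the sample figures above:

    ```python
    # Salary lookup that falls back to the country's "Other" row
    # when the requested state is not listed. Data mirrors the sample table.
    salaries = {
        ("China", "Other"):    {2012: 1000, 2013: 1100},
        ("China", "Shanghai"): {2012: 1310, 2013: 1400},
        ("China", "Tianjin"):  {2012: 1450, 2013: 1500},
        ("India", "Orissa"):   {2012: 1500, 2013: 1600},
    }

    def salary(country, state, year):
        row = salaries.get((country, state)) or salaries[(country, "Other")]
        return row[year]

    print(salary("China", "Shanghai", 2013))  # 1400
    print(salary("China", "Hubei", 2012))     # not listed -> "Other" -> 1000
    ```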

    Read the article

  • External storage for 2TB of backups and 4TB of data: which RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, and I have some new laptops on the way with much larger drives, so I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance isn't a huge concern since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions:

    Should I use hardware RAID in some enclosure? Because if the enclosure dies, I have to find an identical one to get to my data, right? Wouldn't software RAID be better, since I could use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X - can I restore the software RAID easily?

    What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID - just a single 2 TB drive, since it's already the backup - but the remaining 4 TB would be the only copy of that data, so I should build in some redundancy. I had a RAID 5 setup using a cheap RAID PCI card years ago, running a 2 TB array, and when a drive died it took 48 hours to rebuild. Is that crazy slow for a setup of this size, or is it to be expected?

    Any suggestions as to drive enclosures?

    Read the article

  • Associate email account with "Personal Folders" Outlook data file?

    - by TheLQ
    In the process of migrating email servers I've run into an interesting problem: Outlook 2007 has the default "Personal Folders" item, which contains the email for the account that was originally set up with Outlook. My issue is that I have deleted that account and created an entirely new one. So now I have "Personal Folders" and "[email protected]", but I can't delete "Personal Folders", nor associate "[email protected]" with that PST file. Deleting it in Outlook (Tools > Account Settings > Data Files) gave the error "The default data file cannot be removed, because it is your default delivery location. After you have selected a different default delivery location, your current file can be removed." Deleting the PST file itself (outlook.pst) made Outlook demand to know where its default file would be, so I selected my "[email protected]" PST file and restarted Outlook. Now "Personal Folders" is called "[email protected]", but I still have a duplicate account called this. Which is bad. Worse, my email is associated with the duplicate PST, not the default. How can I associate my email with my default PST or delete the default PST entirely? Luckily I have backu

    Read the article

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17"

    As you can see, the serial numbers are present (in this case twice, not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes:

        Account Number  Store Name  S1           S2  S3           Widget Type       Date
        4173            SpeedyCorp  2.68435E+17      2.68435E+17  848 Model Widget  2011-01-17

    As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.
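
    One workaround that is often suggested - offered here as a sketch, assuming you control the PHP export and the customers will accept a slightly different file - is to emit the long serial numbers using Excel's ="..." text-formula trick, which stops Excel from collapsing them into scientific notation. The same idea, illustrated in Python:

    ```python
    import csv

    rows = [
        ["Account Number", "Store Name", "S1", "S2", "S3", "Widget Type", "Date"],
        ["4173", "SpeedyCorp", "268435459705526269", "",
         "268435459705526269", "848 Model Widget", "2011-01-17"],
    ]

    def excel_text(value):
        # Wrap long digit strings in ="..." so Excel keeps them as text
        # instead of rendering 2.68435E+17.
        if value.isdigit() and len(value) > 15:
            return '="%s"' % value
        return value

    with open("widgets.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([excel_text(cell) for cell in row])
    ```

    The other common route is to stop producing CSV for these customers entirely and generate a real .xlsx with the serial columns typed as text, so nothing depends on how Excel guesses.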

    Read the article

  • Office 2010 Trust Center settings: How to enable data connections in the "old" way?

    - by GSerg
    We're planning an upgrade from Office 2003 to 2010 and have identified a big problem. In Office 2003, if the workbook you're opening contains a query table that fetches data from a data source automatically (upon file open or at certain intervals), a security dialog pops up asking whether you want to allow that. If you say Yes, the queries will refresh automatically when they need to. If you say No, the queries will not refresh automatically, neither on file open nor on time intervals, but you will be able to refresh any of them manually at any time by right-clicking and selecting Refresh. There is also a registry parameter to say: don't display that dialog, just allow the queries.

    This is exactly what we want. On users' computers we have the registry parameter applied, so the users never see any dialogs. On developers' computers the parameter is not applied, so every time a file is opened the developer decides whether to allow the auto-refreshing for the current session. Usually the answer is No, because for development it is essential not to have queries refresh when they want to, but instead to refresh them when the developer wants.

    The problem is that in Office 2010, which we are testing, we can't find a way to achieve this functionality: the allow/disallow messages are now grouped into one yellow button that either allows everything or disallows everything (including, say, macros, if macro security is set to "Disable, but ask"). If you don't click the yellow Allow button, the queries are disabled completely, not just for automatic execution. You cannot right-click and refresh a particular query - doing that summons a security dialog prompting for enabling queries, and if you say Yes, all queries in the document will be enabled for auto-execution and will start executing immediately. This sort of ruins our development environment.

    Is there a way to get the trust settings in Office 2010 to work the same way as before? Is there yet another registry parameter to say: prompt for auto-refresh, but allow manual refresh even when auto-refresh is disabled?

    Read the article

  • How to import this data set into Excel? (column headings on each row, delimited by a colon)

    - by Anonymous
    I'm trying to import the following data set into Excel. I've had no luck with the text import wizard. I'd like Excel to make id, name, street, etc. the column names and insert each record onto a new row.

        , id: sdfg:435-345, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info
        , id: sdfg:435-345f, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

    Is there any easy way to do this with Excel? I'm struggling to think of a way to convert this to a conventional CSV easily. As far as I can think, I'd have to remove the labels from each line, enclose each field in quotes, then delimit them with commas. Obviously that's made a little more difficult to script, seeing as some fields (address, for instance) contain comma-delimited data. I'm not good with regex at all. What's the best way to tackle this?
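
    If Excel's import wizard won't cooperate, a small script can strip the labels and produce a conventional CSV first. The following Python sketch is one way to do it; it assumes each record sits on its own line and that the label names are exactly the ones shown above, and it only splits on commas that introduce a known label, so the commas inside the street address and category fields survive.

    ```python
    import csv
    import re

    # Field labels exactly as they appear in the dump (assumed to be the full set).
    LABELS = ["id", "name", "type", "street", "postalcode", "city",
              "telephoneNumber", "mobileNumber", "faxNumber", "url",
              "email", "remark", "geocode", "category"]

    # Split only on commas that are immediately followed by a known label,
    # so commas inside values (addresses, the category list) are left alone.
    SPLITTER = re.compile(r",\s*(?=(?:%s):)" % "|".join(LABELS))

    def parse_record(line):
        record = {}
        for part in SPLITTER.split(line.strip().lstrip(",").strip()):
            label, _, value = part.partition(":")
            record[label.strip()] = value.strip()
        return record

    with open("dump.txt") as src, open("out.csv", "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=LABELS)
        writer.writeheader()
        for line in src:
            if line.strip():
                writer.writerow(parse_record(line))
    ```

    The resulting out.csv opens in Excel with id, name, street, and so on as column headings and one record per row.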

    Read the article

  • Which would be more reliable for data archival - SD card or a generic USB thumbdrive?

    - by Visitor
    I've been thinking lately about what I should use for data storage and archival. I will say in advance that I do not use flash memory as the only storage medium - I also keep my data on hard drives and optical disks - flash memory is just one of several backup solutions that duplicate each other. For the flash memory, however, I do have a choice: a generic USB thumbdrive or an SD card. Are there any indications that SD cards may be better and more reliable? From browsing people's reviews on the web, I see that many complaints about USB sticks have to do with them failing completely - losing the file system and no longer being recognized by the OS. At the same time, most of the complaints about SD cards deal with write speeds not holding up to the promise; failure reports are but a fraction of those for the USB sticks. Are SD cards indeed more reliable? Am I also correct in my assumption that SD cards use higher-grade NAND chips than USB thumbdrives? At least for class 10 cards, because the specification dictates a minimum performance and the manufacturers have to preselect better chips. It is common for USB sticks to promise high speeds "up to XX MB/sec", but in reality they very often deliver speeds 2-3 times lower than promised. Do SD cards get the better NAND chips while USB thumbdrives receive the discarded ones? Any thoughts would be appreciated.

    Read the article

  • How to remove all data that a website stores on a PC? [on hold]

    - by s.r.a
    I was a member of a computer forum website (sevenforums.com) for months. Two days ago I created a thread and many members participated in it; some of them asked me irrelevant questions, and I said "this question is irrelevant" and didn't give them an answer. The thread finished, and I was later able to find the answer on another website and posted it back to share it with the members. Yesterday, like every day, I went to that website and was faced with a message saying I was banned, and my account was disabled. I was shocked, and I didn't even have any option to appeal against that wrong decision. So I had to do something, and I did the following:

    1. I disconnected my Internet connection and cleared all the history data in the browser I use, Google Chrome.
    2. I then ran the "ccleaner" tool, marked almost all the options and clicked the "run" button. It cleared all the data, including the cookies.
    3. I connected the machine (a desktop) to the Internet and immediately changed my IP address.
    4. I created a new Hotmail account and tried to register as a new member on that website (sevenforums.com).
    5. I succeeded and my new account was enabled, so I started posting to that website. But unfortunately, after less than a minute I faced this message: "You are already banned"!

    My question is: how could they recognize me again? How can I create a new account without them knowing it's me? Thanks in advance.

    Read the article

  • Do email providers have to tell me which (inter)national agencies/institutes are requesting legal access to my account data?

    - by Juve
    I know this question is not technical, but I did not find the "Stack Overflow for legal issues" and I guess all you super users out there might know the answer. Here is my (potential) problem: I have a free email account at an (inter)national email provider. I used the words "wikileaks" and "twitter" in my email lately. Some over-ambitious national security organization legally requests access to all accounts that behaved similarly.

    Q1: Can I request the who, when, and why information related to this legal request from my provider? Does the provider have to tell me which (inter)national organizations (legally) requested my account data?

    Q2: Does the situation change if I live in Germany (and have a German provider)? I guess there are some German users here. And I know that such a legal policy exists for our national credit rating agency: I can request who got access to my data, and they have to tell me.

    Please answer only if you know a good answer; I don't want to start a long discussion on this non-technical question. Best regards, Juve

    Read the article

  • How do I modify the XSL to change the XML format?

    - by user323719
    Here is the XSL in question:

        <xsl:param name="insert-file" as="document-node()" />

        <xsl:template match="*">
            <xsl:variable name="input">My text</xsl:variable>
            <xsl:variable name="Myxml" as="element()*">
                <xsl:call-template name="populateTag">
                    <xsl:with-param name="nodeValue" select="$input"/>
                </xsl:call-template>
            </xsl:variable>
            <xsl:copy-of select="$Myxml"></xsl:copy-of>
        </xsl:template>

        <xsl:template name="populateTag">
            <xsl:param name="nodeValue"/>
            <xsl:for-each select="$insert-file/insert-data/data">
                <xsl:choose>
                    <xsl:when test="@index = 1">
                        <a><xsl:value-of select="$nodeValue"></xsl:value-of></a>
                    </xsl:when>
                </xsl:choose>
            </xsl:for-each>
        </xsl:template>

    I am getting the output as:

        <?xml version="1.0" encoding="UTF-8"?>
        <a>My text</a>
        <a>My text</a>
        <a>My text</a>
        <a>My text</a>

    I want the template "populateTag" to return the XML in the format below instead, with the elements nested rather than repeated as siblings. How do I modify the template "populateTag" to achieve this? Expected output:

        <?xml version="1.0" encoding="UTF-8"?>
        <a><a><a><a>My text</a></a></a></a>

    Please give your ideas.

    Read the article

  • Can't change pivot table's Access data source - bug in Excel 2000 SP3?

    - by Ron West
    I have a set of Excel 2000 SP3 worksheets with Pivot Tables that get data from an Access 2000 SP3 database created by a contractor who has left our company. Unfortunately, he did all his work in his private area on the company (Novell) network, and now that he has left us, that drive spec has been deleted and is invalid. Our IT Service Desk restored the database files to our group area, but we now have to re-link everything to point to the new location instead of the now-nonexistent private area.

    If I follow the advice given elsewhere on this site (open the wizard, click 'Back' to get to 'Step 2 of 3', click 'Get Data...'), I get a message that the old filespec is an invalid path and that I should check that the path name is valid and that I am connected to the server on which the file resides. I then click OK and get a Login dialog with a 'Database...' button on the right. I click it and get a 'Select Database' dialog which lets me choose the appropriate database in its correct new location. I then click OK, which takes me back to the 'Login' screen. I can confirm that it has accepted the new location by clicking 'Database...' again: the new location is still shown. So far so good - but if I then click OK I get two unhelpful messages. First, one saying that Excel "Could not use '|'; file already in use." - although no other files are in use. Clicking OK takes me back to the 'Login' dialog. Clicking OK again gives me the same message as before, telling me that the OLD filespec is invalid (as if I hadn't changed anything) - yet clicking the 'Database...' button shows that the correct (new) database location is still selected.

    Can anyone tell me a way of using VBA to change the link information without having to spend hours fighting the PivotTable Wizard - preferably something similar to the way you update an Access TableDef:

        db.TableDefs(strLinkName).Connect = strNewLink
        db.TableDefs(strLinkName).RefreshLink

    Thanks!

    Read the article

  • curl_multi_exec stops if one url is 404, how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1: Why does it stop? That doesn't make sense to me.
    2: How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();
        while($resultSet = mysql_fetch_array($SQL)){
            // load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            // Only load it for two seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;
        // execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh, $running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // Loop to remove (close) the regular handles.
        foreach($handles as $ch) {
            // Remove the current handle.
            curl_multi_remove_handle($mh, $ch);
        }
        // Close the multi handle
        curl_multi_close($mh);
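
    For comparison only (the question's code is PHP, and a 404 is normally a completed transfer as far as curl_multi is concerned, so the stop itself is worth debugging): the general pattern for making a batch of requests tolerant of individual failures is to catch the error per URL rather than per batch. A standard-library Python sketch of that idea, with placeholder URLs:

    ```python
    import concurrent.futures
    import urllib.error
    import urllib.request

    # Placeholder URLs standing in for the rows from the `shells` table.
    urls = [
        "http://example.com/ok",
        "http://example.com/missing",   # a 404 here should not stop the rest
    ]

    def fetch(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return url, resp.status
        except urllib.error.HTTPError as exc:
            return url, exc.code          # 404 and friends: record it, keep going
        except urllib.error.URLError:
            return url, None              # connection-level failure

    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for url, status in pool.map(fetch, urls):
            print(url, status)
    ```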

    Read the article

  • Can I change the view without changing the controller?

    - by Ian Boyd
    Pretend [1] there is a place to type in a name:

        Name: __________________

    When the text box changes, the value is absorbed into the controller, who stores it in the data model. Business rules require that a name be entered: if there is no text entered, the TextBox should be colored something in the view to indicate badness; otherwise it can be whatever color the view likes. The TextBox contains a String, the controller handles a String, and the model stores a String.

    Now let's say I want to improve the view. There is a new kind of text box [2] that can be fed not only string-based keyboard input but also an image. The view (currently) knows how to determine if the image is in the proper format to perform the processing required to extract text out of it. If there is text, then that text can be fed to the controller, who feeds it to the data model. But if the image is invalid, e.g. [3]

    - wrong file format
    - invalid dimensions
    - invalid bit depth
    - unhandled or unknown encoding format
    - missing or incorrectly located registration marks
    - contents not recognizable

    the view can show something to the user to say that the image is bad. But telling the user that something is bad is supposed to be the job of the controller. I'm, of course, not going to rewrite the controller to handle image-based text input (e.g. image-based names), because:

    a. the code is binary, locked inside a GUI widget [4]
    b. there are other views besides this one, and I'm not going to impose a particular view onto the controller
    c. I just don't wanna

    If I have to change things outside of this UI improvement, then I'll just leave the UI unimproved [5]. So what's the thinking on having different views for the same Model and Controller?

    Nitpicker's Corner
    [1] contrived hypothetical example
    [2] e.g. bar code, g-mask, OCR
    [3] contrived hypothetical reasons
    [4] or the hardware of a USB bar-code scanner
    [5] forcing the user to continue to use a DateTimePicker rather than a TextBox

    Read the article

  • What could trigger a change of http status to 500 on the client's end?

    - by VexedPanda
    We have a PHP web application that posts data to itself and either displays an updated page based on that data or redirects to another page. An example of this is a script with a paged list on it, where clicking the Next link causes a post to the same page, which then returns an updated version of the page showing the new set of list items. One client is reporting that IE is displaying friendly error messages when the page updates itself, instead of the correct behavior of displaying the updated page. Turning friendly error messages off "corrects" this problem and displays the updated page normally, indicating that no actual server error occurred. When testing from any location other than this client's, our web app does not produce any HTTP error statuses, and in this specific situation it only produces 200 statuses (according to Fiddler). What could be interfering with the HTTP POST and changing the response's status code to 500 (or another code that would trigger friendly errors in IE)? Are there certain proxies or other network tools that could be misconfigured or buggy in this manner? Is there any way we can alter our application (apart from avoiding posts to the same script, which is not feasible) to get around this misbehavior?

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on the issue, I determined that it was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size/complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with, because from time to time we're encouraged by the Scrum master to "adjust" our size/complexity estimates, both increasing and decreasing them. I'd like to hear other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size/complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations like the one I've just described.

    Read the article

  • How can I change the default startup directory for cmd.exe?

    - by Nano HE
    Hi. This is the procedure I followed yesterday:

    1. Click Start, Run and type Regedit.exe.
    2. Navigate to the following branch: HKEY_CURRENT_USER\Software\Microsoft\Command Processor
    3. In the right pane, double-click Autorun and set the startup folder path as its data, preceded by "CD /d ". If the Autorun value is missing, create one of type REG_EXPAND_SZ or REG_SZ in the above location.

    Example: to set the startup directory to D:\learning\perl, set the Autorun value data to:

        CD /d D:\learning\perl

    Then I clicked Start, Run and typed cmd. It worked, and I could practice Perl more conveniently.

    But today I found that when I try to build my Visual Studio 2005 solution, which includes pre-build event commands like this:

        perl.exe MyAppVersion.pl
        perl.exe AttrScan.pl

    it doesn't work. It shows an error: can't find the path. I checked the environment variable settings, and the path entry (c:\perl\bin\;) still exists. Finally, I removed the "Autorun" value from the registry and tested again, and the issue was fixed. I only changed the default startup directory for cmd.exe - why were the pre-build event Perl commands affected? (I am using Windows XP and ActivePerl 5.8.)
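
    The likely reason the build broke is that the Autorun value runs for every cmd.exe instance - including the hidden ones Visual Studio spawns for pre-build events - so the CD /d changes the working directory and the relative paths to MyAppVersion.pl and AttrScan.pl no longer resolve. If you still want the convenience, one option is to script turning the value on and off rather than editing the registry by hand; here is a sketch using Python's standard winreg module, with the key and value names exactly as in the steps above:

    ```python
    import winreg

    KEY_PATH = r"Software\Microsoft\Command Processor"

    def set_autorun(startup_dir):
        # Equivalent to the manual Regedit steps above.
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            winreg.SetValueEx(key, "Autorun", 0, winreg.REG_SZ,
                              'CD /d "%s"' % startup_dir)

    def clear_autorun():
        # Remove the value again, e.g. before building in Visual Studio.
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            try:
                winreg.DeleteValue(key, "Autorun")
            except FileNotFoundError:
                pass  # nothing to remove

    set_autorun(r"D:\learning\perl")
    # clear_autorun()
    ```

    Another way around the clash is to have anything that scripts cmd.exe start it with the /d switch, which skips the Autorun command entirely.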

    Read the article

  • How to convert html textfield/area data to server-side txt file? [closed]

    - by olijake
    How can I make a script that will take the text/data in an HTML textfield/textarea and send it to the server, which then saves it as a .txt file for storage?

    NOTE: I am hosting a website (for testing purposes) using Apache 2.2 on a Windows 7 machine. I downloaded PHP version 5.4.7, but have not installed it on my server yet (not sure if I will need it, but also not sure how to install it).

    1st problem: Saving text to the server. An HTML page/section with a title textfield, a text textarea, and a submit button. You would enter a title and the text/notes you need, then press the submit button to have it store the text from the textarea as a .txt file on the server named after the title.

    2nd problem: Opening text from the server. HTML with a list of all txt files, OR a textfield for entering the title, then a submit button to send the title of the requested .txt file to the server, which would then load it up on the page.

    Here is what I have so far (let me know if there is something I should change, or if something just isn't correct, in the index.html code I have right now):

        <!DOCTYPE HTML>
        <html>
        <head>
            <title>Insert Title</title>
            <meta http-equiv="Content-Type" content="Text/HTML; charset=UTF-8"/>
        </head>
        <body>
            <form method="post" action="save.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
                <input type="text" name="title"><br/>
                <textarea rows="20" cols="100" id="text" name="text"></textarea><br/>
                <input type="submit" name="submit" value="Submit Text to Server">
            </form><br/>
            <hr style="width: 100%; height: 4px;"><br/>
            <form method="post" action="open.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
                <input type="text" name="title"><br/>
                <input type="submit" name="submit" value="Submit Txt File Request">
            </form><br/>
            <div>Opened text file displays here or goes on another page</div>
        </body>
        </html>

    I plan on using a server-side language/script, but ANYTHING that gets the job done is fine. I already tried looking into using some ASP/JScript/PHP, but have had some trouble implementing it on my server (i.e. getting the modules loaded and telling the server what file types to parse). I know this may be an extremely easy fix, but in that case hopefully you wouldn't mind helping me out a little :). If it turns out that this is MUCH more complicated than I expect, then feel free to let me know, so I don't waste my time running in circles. I appreciate any help/assistance that you can provide. Thanks, Jake

    EDIT: Wrong Apache version. In response to the comments/closing of this thread, my question is: "How exactly do I install the PHP module on the Apache server? Is this even possible? And is this even recommended?" In case I wasn't clear enough already, to clarify: I understand the basics of PHP, I just have trouble with INSTALLING PHP on the Apache server. (I have used PHP before, but never successfully on Apache, so far...)

    For my script I already wrote something similar to this (using fopen() and a few other commands):

        <?php
        fopen("notes.txt", "r");
        file_put_contents("notes.txt", teststring1);
        ?>

    I have used JavaScript for this task before as well (although I prefer PHP and server-side languages):

        <script language="javascript">
        function WriteToFile(){
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            var s = fso.CreateTextFile("C:\\NewFile.txt", true);
            var text = document.getElementById("TextArea1").innerText;
            s.WriteLine(text);
            s.WriteLine('***********************');
            s.Close();
        }
        </script>
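
    Since the poster says anything that gets the job done is fine, here is a minimal sketch of the save/load round trip using only Python's standard library, so no Apache module setup is needed while testing. It is an illustration, not production code: the port and directory are placeholders, it assumes the form posts as application/x-www-form-urlencoded (not multipart), and it does no sanitising of the title.

    ```python
    import pathlib
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    STORE = pathlib.Path("notes")      # directory the .txt files live in
    STORE.mkdir(exist_ok=True)

    class NoteHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Save the posted text as <title>.txt
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode())
            title = fields.get("title", ["untitled"])[0]
            text = fields.get("text", [""])[0]
            (STORE / (title + ".txt")).write_text(text)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Saved.")

        def do_GET(self):
            # /?title=somefile returns the stored note, if it exists.
            query = parse_qs(self.path.partition("?")[2])
            title = query.get("title", [""])[0]
            path = STORE / (title + ".txt")
            found = path.exists()
            self.send_response(200 if found else 404)
            self.end_headers()
            self.wfile.write(path.read_text().encode() if found else b"Not found.")

    HTTPServer(("localhost", 8000), NoteHandler).serve_forever()
    ```

    The PHP equivalent of the save step is essentially the file_put_contents() call already quoted above, wrapped in a script that reads $_POST['title'] and $_POST['text'].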

    Read the article

  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
    Introduction

    Preamble: I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced developer to help me along, but all I've gotten are unqualified candidates who don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are - very abstractly - close to what I'm doing, but my experience from those applications isn't giving me enough leverage to solve this particular problem. I just hope that posting this isn't a mistake.

    Project Description

    I have a project I'm working on for a client that is a rewrite of an application, originally written in FoxPro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose per my employment contract) on financial data. One day, after a long talk between the client and me - where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now - he told me to just rewrite it, and gave me a month to write a good 1/8 of this 65k-LOC FoxPro monstrosity. It'll take me a good 3-6 months to rewrite this software going as I am right now (I know things the original programmer did not, like inheritance), but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about FoxPro, so I've had to learn it on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in FoxPro if it were still a supported language, because over the past few months I've come to like the features of FoxPro that make it so easy to develop data-driven applications. I once performed an experiment comparing C# to FoxPro: what took me 45 minutes in C# took me two in FoxPro, and I knew C# prior to FoxPro. I was hoping to leverage the power of C#, but it intimidates me that in FoxPro you can have one line of code and be using a database. Prior to this, I have never written any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive or data-naive enough that I can get away with not using a database, through serialization or by designing algorithms that work with the data in a stateless manner, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures have been my crutch all these years, preventing me from venturing into databases and consequently hindering my success in real-world programming.
    Sure, I've written some database stuff in Perl and Python, I've done forms and worked with relational databases and tables, and I'm a wizard in Access and Excel (seriously) who can do just about anything, but it just feels unnatural writing SQL code inside another language... I don't mind writing SQL; it's the bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one who thinks this, but it bothers me that I have to create statements like:

        string sSql = "SELECT * from tablename"

    when there's really no reason for that kind of unchecked binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software, and there's no database independence, which means that different versions of different databases can break code. This isn't very nice. The nicest thing about FoxPro is the cohesiveness between programming language and database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have built a career with this language: it lowered the barrier to entry for the data-driven applications that so many businesses need. It was wonderful. For my purposes today, though, with the demands and need for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks). And OH, the queries. Which one, how, and why? There are a plethora of different ways that a database can be set up; I think I counted 5 or 6 different kinds of database applications alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C# - I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.

    Reports

    The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, so this is a bit of a problem. From the data source come Crystal Reports, and so there's a dependency on the database, from what I understand.

    Code Reuse

    A large part of the design decisions I've gone through so far is to break the task of writing a piece of this software into routines and modular DLLs and so forth, such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up just where I left off. The quicker I develop these applications, the better off I am.

    Tasks & Goals

    In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data.
    I also need a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

    Special Considerations

    The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that I'm married to anything I write - I'm responsible for the things I write, because my reputation and livelihood depend on it.

    Do I really need a database? What about performance?

    Performance was such a big issue that I hand-wrote a data structure capable of performing 2 billion operations, using a total of 4 gigs of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task.

    What setup do I use in terms of database? What about reporting?

    I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, renders that data.

    About Me

    I'm a lone developer for a small business in this area. This is the first time I've done this, and I've never had the breadth and depth of my knowledge tested like this. I'm incredibly frustrated with this project because I feel overwhelmed by the task at hand. I'm looking for that entry point where I can draw a line and say "this is what I need to do".

    Conclusion

    I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leached so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. Something that would be useful is a pro/con list for the different kinds of database projects available in VS2010. I've tried, but generating that list has been as hard as solving the problem itself... If you could walk a developer through writing a data-driven application for the first time in C#, how would you do it? Where would you point them?
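
    On the narrower complaint above about statements like string sSql = "SELECT * from tablename": parameter binding plus a thin data-access layer at least keeps the SQL in one place and out of string concatenation. The asker's stack is C#/ADO.NET (where the equivalent is a SqlCommand with parameters), but purely as an illustration of the shape of such a layer, here is a tiny sketch using Python's built-in sqlite3 module with made-up table and column names:

    ```python
    import sqlite3

    def trades_after(conn, cutoff):
        # SQL lives in one place; values are bound as parameters rather than
        # concatenated into the string, so the query text never changes shape.
        query = "SELECT symbol, price, traded_on FROM trades WHERE traded_on > ?"
        return conn.execute(query, (cutoff,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trades (symbol TEXT, price REAL, traded_on TEXT)")
    conn.execute("INSERT INTO trades VALUES ('XYZ', 101.5, '2012-06-01')")
    print(trades_after(conn, "2012-01-01"))
    ```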

    Read the article

  • How can I dynamically change auto complete entries in a C# combobox or textbox?

    - by Sam Hopkins
    I have a combobox in C# and I want to use auto complete suggestions with it; however, I want to be able to change the auto complete entries as the user types, because the possible valid entries are far too numerous to populate the AutoCompleteStringCollection at startup. As an example, suppose I'm letting the user type in a name. I have a list of possible first names ("Joe", "John") and a list of surnames ("Bloggs", "Smith"), but if I have a thousand of each, then that would be a million possible strings - too many to put in the auto complete entries. So initially I want to have just the first names as suggestions ("Joe", "John"), and then once the user has typed the first name ("Joe"), I want to remove the existing auto complete entries and replace them with a new set consisting of the chosen first name followed by the possible surnames ("Joe Bloggs", "Joe Smith"). In order to do this, I tried the following code:

        void InitializeComboBox()
        {
            ComboName.AutoCompleteMode = AutoCompleteMode.SuggestAppend;
            ComboName.AutoCompleteSource = AutoCompleteSource.CustomSource;
            ComboName.AutoCompleteCustomSource = new AutoCompleteStringCollection();
            ComboName.TextChanged += new EventHandler( ComboName_TextChanged );
        }

        void ComboName_TextChanged( object sender, EventArgs e )
        {
            string text = this.ComboName.Text;
            string[] suggestions = GetNameSuggestions( text );
            this.ComboQuery.AutoCompleteCustomSource.Clear();
            this.ComboQuery.AutoCompleteCustomSource.AddRange( suggestions );
        }

    However, this does not work properly. It seems that the call to Clear() causes the auto complete mechanism to "turn off" until the next character appears in the combo box, but of course when the next character appears the above code calls Clear() again, so the user never actually sees the auto complete functionality. It also causes the entire contents of the combo box to become selected, so between every keypress you have to deselect the existing text, which makes it unusable. If I remove the call to Clear() then the auto complete works, but it seems that then the AddRange() call has no effect, because the new suggestions that I add do not appear in the auto complete dropdown. I have been searching for a solution to this, and seen various things suggested, but I cannot get any of them to work - either the auto complete functionality appears disabled, or new strings do not appear. Here is a list of things I have tried:

    - Calling BeginUpdate() before changing the strings and EndUpdate() afterwards.
    - Calling Remove() on all the existing strings instead of Clear().
    - Clearing the text from the combobox while I update the strings, and adding it back afterwards.
    - Setting the AutoCompleteMode to "None" while I change the strings, and setting it back to "SuggestAppend" afterwards.
    - Hooking the TextUpdate or KeyPress event instead of TextChanged.
    - Replacing the existing AutoCompleteCustomSource with a new AutoCompleteStringCollection each time.

    None of these helped, even in various combinations. Spence suggested that I try overriding the ComboBox function that gets the list of strings to use in auto complete. Using a reflector I found a couple of methods in the ComboBox class that look promising - GetStringsForAutoComplete() and SetAutoComplete() - but they are both private so I can't access them from a derived class. I couldn't take that any further. I tried replacing the ComboBox with a TextBox, because the auto complete interface is the same, and I found that the behaviour is slightly different.
With the TextBox it appears to work better, in that the Append part of the auto complete works properly, but the Suggest part doesn't - the suggestion box briefly flashes to life but then immediately disappears. So I thought "Okay, I'll

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a "program execution recorder" that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike, so I see no value in just repeating that information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .NET User Group meeting on April 20th 2010 (check http://www.fladotnet.com for more information) as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also provide some helpful links that cover IntelliTrace in more detail at the end of the article for reference.

    IntelliTrace is set up by default to capture only execution events. Microsoft did such a great job of optimizing its recording process that I haven't felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects. For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options requires us to stop our debugging session and start over for the new settings to take effect.

    Figure 1 - Access IntelliTrace options via the Tools -> Options menu items
    Figure 2 - Change IntelliTrace options to capture call information as well as events

    Notice the warning that capturing call information in addition to the default events-only setting can degrade performance. I have found this warning to be true: my subsequent tests showed slower page load times compared to rendering those same pages with the events-only option selected.

    Execution recording starts automatically along with a new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to "play back" the code we have executed so far. For code replay, the first step is to "break" the current execution, as shown in Figure 3.

    Figure 3 - Break to replay recording

    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the events view, as shown in Figure 4, until we find an interesting event that needs further studying.

    Figure 4 - Going through IntelliTrace's events and picking a specific entry of interest

    We can now, for instance, study how the highlighted HTTP GET request is being handled by clicking on the "Calls View" for that particular event. Notice that IntelliTrace shows us all the calls that took place in servicing that GET request. Double-clicking on any call takes us to a more granular view of the call stack within that call, down to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like good old VS2008 debugging.

    Figure 5 - Switching to call view on an event of interest
    Figure 6 - Double-clicking on a call shows a more granular view of the call stack

    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the "no-repro" problem. It will also hopefully enhance my audience's experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer.

    IntelliTrace references:
    http://msdn.microsoft.com/en-us/magazine/ee336126.aspx
    http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer faced decentralized and manual processes for entering 180,000 vendor invoices annually; invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging - a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and send them through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84% - from 20 to 30 days to just 3 to 5 days, on average.

    Company Overview

    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.

    Business Challenges

    TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were:

    - Increase the efficiency of core business processes, such as invoice processing, to support the organization's desire to maintain its role as the Southwest's leader in delivering high-quality, low-cost products to the construction industry
    - Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance
    - Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities

    Solution Deployed

    TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.

    Business Results

    - Significantly lowered resource needs for payables processing through centralization and improved efficiency
    - Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts
    - Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84% - from 20 to 30 days to just 3 to 5 days, on average
    - Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals

    "Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84% - a very important efficiency gain." - Terry Marshall, Vice President of Information Services, Texas Industries, Inc.

    Additional Information

    - TXI Customer Snapshot
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Forms Recognition

    Read the article
