Search Results

Search found 24211 results on 969 pages for 'shell command'.

Page 458/969

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting for delivery on our secondary mail server. Their link to the main server has been down for two days (and still is) and they needed to see their email. So I wrote up a quick Perl script that uses mailq in combination with postcat to dump each email for their address into separate files, tarred it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but tonight I thought it would be nice to have a solution that could extract their email attachments and maybe strip some of the "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only way I can see to view queue files is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) { next; }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") { next; }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
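    One possible refinement (a sketch rather than a tested answer): keep using postcat -q to render each queued message, but hand the result to a MIME unpacker such as ripmime or munpack instead of grepping it, so the PDF attachments come out as real files. Assuming ripmime is installed and reusing the same mailq selection as above:

        #!/bin/bash
        # Sketch: dump each queued message for one address and extract its MIME attachments.
        # Assumes ripmime is installed; the address and output directory are placeholders.
        ADDR="someemailaddress@isp"
        OUTDIR=./queue-export
        mkdir -p "$OUTDIR"
        for qid in $(mailq | grep -B 2 "$ADDR" | awk '/^[0-9A-F]/{print $1}'); do
            qid=${qid%[*!]}                                     # strip the active/hold marker, if any
            postcat -q "$qid" > "$OUTDIR/$qid.eml"              # raw message, headers included
            ripmime -i "$OUTDIR/$qid.eml" -d "$OUTDIR/$qid"     # attachments land in their own directory
        done

    Whether the queue-format lines that postcat prints confuse the MIME parser is worth checking on one message first; if they do, postcat's output may need a little trimming before ripmime sees it.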

    Read the article

  • How do I install websocket module for Node.js on Debian VPS?

    - by Ollie Shaw
    I currently rent a VPS from Dreamhost which runs Debian. I am still learning the command line on this OS, but fast! I have successfully installed Node.js, and now I want to install the websocket module found here: https://github.com/Worlize/WebSocket-Node From the root user, I have run the following command:

        npm install websocket

    The error thrown is:

        [websocket v1.0.7] Native code compile failed!!
        On Windows, native extensions require Visual Studio and Python.
        On Unix, native extensions require Python, make and a C++ compiler.
        Start npm with --websocket:verbose to show compilation output (if any).

    What commands should I issue to install this websocket module and its requirements? Thanks very much! Edit: When I run sudo apt-get install gcc make I get this message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gcc is already the newest version.
        gcc set to manually installed.
        make is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 44 not upgraded.

    And the same error when trying to install WebSocket.
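    A likely gap (an assumption, not something the error output proves): installing gcc and make does not give you a C++ compiler, and node's native-module build also wants g++ and Python. A sketch of the usual fix on Debian would be:

        # install the full native-build toolchain, then retry the module
        sudo apt-get install build-essential g++ python
        npm install websocket

    build-essential pulls in make, gcc and friends in one go; if the compile still fails, running npm with the verbose flag mentioned in the error should show which tool is still missing.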

    Read the article

  • What is a fast way to set debugging code at a given line in a function?

    - by Josh O'Brien
    Preamble: R's trace() is a powerful debugging tool, allowing users to "insert debugging code at chosen places in any function". Unfortunately, using it from the command-line can be fairly laborious. As an artificial example, let's say I want to insert debugging code that will report the between-tick interval calculated by pretty.default(). I'd like to insert the code immediately after the value of delta is calculated, about four lines up from the bottom of the function definition. (Type pretty.default to see where I mean.) To indicate that line, I need to find which step in the code it corresponds to. The answer turns out to be step list(c(12, 3, 3)), which I zero in on by running through the following steps:

        as.list(body(pretty.default))
        as.list(as.list(body(pretty.default))[[12]])
        as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])
        as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])[[3]]

    I can then insert debugging code like this:

        trace(what = 'pretty.default',
              tracer = quote(cat("\nThe value of delta is: ", delta, "\n\n")),
              at = list(c(12,3,3)))
        ## Try it
        a <- pretty(c(1, 7843))
        b <- pretty(c(2, 23))
        ## Clean up
        untrace('pretty.default')

    Questions: So here are my questions: Is there a way to print out a function (or a parsed version of it) with the lines nicely labeled by the steps to which they belong? Alternatively, is there another easier way, from the command line, to quickly set debugging code for a specific line within a function? Addendum: I used the pretty.default() example because it is reasonably tame, but with real/interesting functions, repeatedly using as.list() quickly gets tiresome and distracting. Here's an example:

        as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(body(#
            model.frame.default))[[26]])[[3]])[[2]])[[4]])[[3]])[[4]])[[4]])[[4]])[[3]]

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

        - The GUI tools aren't mature.
        - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
        - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
        - If you have permission to ANY part of a repository, you can do the same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
        - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
        - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
        - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or how to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
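    Not an answer to the whole workflow question, but one concrete mitigation for the "obliterate someone else's changes" point is to configure the central repository to refuse forced pushes and branch deletions. A minimal sketch, assuming a shared bare repository on the central server (the path is illustrative):

        # create a group-shared central repository
        git init --bare --shared=group /srv/git/project.git

        # refuse history rewrites and branch deletions on the server
        git --git-dir=/srv/git/project.git config receive.denyNonFastForwards true
        git --git-dir=/srv/git/project.git config receive.denyDeletes true

    For per-branch or per-user permissions beyond what bare git offers, an access-control layer such as gitolite (or a hosted server that wraps git) is the usual route; plain git itself only knows about filesystem-level access.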

    Read the article

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst-case scenarios of atomic commit protocols like 2PC and 3PC and am stuck at the point that I can't find out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? (I'm working from the simplified 3PC description in the Wikipedia article.) Now let's assume the following case:

        - Two cohorts participate in the transaction (A and B).
        - Both do their work, then vote for commit.
        - The coordinator now sends precommit messages...
        - A receives the precommit message, acknowledges, and then goes offline for a long time.
        - B doesn't receive the precommit message (whatever the reason might be) and is thus still in the "uncertain" state.

    The results:

        - The coordinator aborts the transaction because not all precommit messages were sent and acknowledged successfully.
        - A, who is in the precommit state, is still offline, thus times out and commits.
        - B aborts in any case: he either stays offline and times out (which causes an abort) or comes online and receives the abort command from the coordinator.

    And there you have it: one cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in the precommit state) were replaced by infinitely waiting for a coordinator command, that case should work fine.

    Read the article

  • Regex: match a non-nested code block

    - by Sylvanas Garde
    I am currently writing a small text editor. With this text editor, users are able to create small scripts for a very simple scripting engine. For a better overview I want to highlight code blocks that use the same command, like GoTo(x,y) or Draw(x,y). To achieve this I want to use regular expressions (I am already using them to highlight other things, like variables). Here is my expression (I know it's very ugly):

        /(?<!GoTo|Draw|Example)(^(?:GoTo|Draw|Example)\(.+\)*?$)+(?!GoTo|Draw|Example)/gm

    It matches the following:

        lala
        GoTo(5656)            -> MATCH 1
        sdsd
        GoTo(sdsd) --comment  -> MATCH 2
        GoTo(23329);          -> MATCH 3
        Test()
        GoTo(12)              -> MATCH 4
        LALA
        Draw(23)              -> MATCH 5
        Draw(24)              -> MATCH 6
        Draw(25)              -> MATCH 7

    But what I want to achieve is that the complete "blocks" of the same command are matched. In this case Match 2 & 4 and Match 5 & 6 & 7 should be one match. Tested with http://regex101.com/; the programming language is vb.net. Any advice would be very useful. Thanks in advance!

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following:

        /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found
        Could not execute editor

    My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it:

        EMACS_HOME="/Applications/Emacs.app/Contents/MacOS"
        function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ }
        function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ }
        function es() { e --daemon=$1 && ec -s $1 }
        function el() { ps ax|grep Emacs }
        function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 }
        function ecompile() {
            e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \
              -batch -f batch-byte-compile $@
        }
        alias emacs=e
        alias emacsclient=ec

    And I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e"). But whatever I try, I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc.
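    One thing worth noting (an observation, not a confirmed diagnosis): git launches the editor through a plain shell, so zsh functions and aliases like e and ec are never visible to it - it needs a real executable on PATH or an absolute path. A minimal sketch of that approach, reusing the Emacs.app path from the zshrc above:

        # point git straight at the Cocoa Emacs console binary
        git config --global core.editor "/Applications/Emacs.app/Contents/MacOS/Emacs -nw"

        # or wrap it in a small executable script and use that as the editor
        mkdir -p ~/bin
        cat > ~/bin/git-emacs <<'EOF'
        #!/bin/sh
        exec /Applications/Emacs.app/Contents/MacOS/Emacs -nw "$@"
        EOF
        chmod +x ~/bin/git-emacs
        git config --global core.editor ~/bin/git-emacs

    GIT_EDITOR would work the same way, as long as it expands to something a non-interactive shell can actually execute.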

    Read the article

  • Why does sshd give a different identification when connecting through netcat?

    - by Robbie Mckennie
    I have been attempting to create a way to ssh into a machine hiding behind a firewall. I set up my ssh client with the option ProxyCommand /usr/bin/ncat -l 2000, and then I connect it to sshd with ncat <client> 2000 -c "sshd -i" on the server. It works in that I can get a shell on the server, but the server sends a different host key than when I use normal ssh. So the question is, why? Is the key different when sshd is called in this unusual way?
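    A way to narrow this down (the hypothesis being that the manually spawned sshd reads a different configuration or host keys than the system service) is to be explicit about both when you launch it, and to compare which binary is actually being run:

        # is the sshd found on PATH the same one the init script runs?
        which sshd
        ps aux | grep '[s]shd'

        # launch the inetd-mode sshd with the system config and host key spelled out
        ncat <client> 2000 -c "/usr/sbin/sshd -i -f /etc/ssh/sshd_config -h /etc/ssh/ssh_host_rsa_key"

    If the key offered then matches the normal one, the difference was in which config and keys the bare "sshd -i" picked up; -f and -h are standard sshd options, but the paths above are only the usual defaults and may differ on your distribution.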

    Read the article

  • Change permission of files with the owner 'apache'

    - by Dotty
    Hey, I have some files on my server with the owner set to "apache"; I'm not quite sure how this happened. Anyway, I need to change the permissions of these files to 0777 so I can download/edit them. However, I cannot. I'm using a 1and1 Linux server and use Plesk to administrate it. I have the ability to log in via SSH. However, if I run chmod or chown I get a "permission denied" error, and if I try to sudo chmod or chown it says the command cannot be found. When I go to edit my domain details, I get the option "Shell access to server with FTP user's credentials" with these choices:

        /bin/sh
        /bin/bash
        /sbin/nologin
        /bin/bash (chrooted)
        /bin/rbash

    Any ideas how I should go about changing the permissions or changing the owner? Thanks

    Read the article

  • How do I reset the $PATH variable on Mac OS X?

    - by Neil
    I've messed up my path variable, and now some apps that I run raise errors saying Command Not Found (error 127) for commands like 'date' and 'sleep'. These commands work fine when executed directly in the shell. I'm guessing this has something to do with a malformed $PATH variable, and I need to know how to reset it. I've deleted ~/.bashrc, ~/.bash_profile, /etc/bash.bashrc, and ~/.profile. What other files could hold my $PATH? Is there some simpler way to reset the path than digging into the myriad files that could hold it? Note, this path problem only affects my user. I made a test user on my system, and its path was fine, back to normal.
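    For getting a working session back while you hunt for the culprit file, you can set a sane PATH by hand; on OS X the system default is normally assembled by path_helper from /etc/paths and /etc/paths.d. A sketch (the exact directory list may vary slightly between OS X versions):

        # restore a reasonable PATH for the current shell only
        export PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"

        # see what the system itself would build PATH from
        cat /etc/paths
        ls /etc/paths.d
        eval "$(/usr/libexec/path_helper -s)"

    Since the problem only affects your user, whatever overrides PATH is almost certainly in one of that user's own startup files (or in something they source), not in the system-wide ones.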

    Read the article

  • Login authentication vanished from MongoDB install

    - by Robert Oschler
    A few months ago I enabled password protection on my MongoDB install. Today I ran the Mongo client and forgot to use my login details. Instead of rejecting nearly everything I tried to do from the shell, like it should, it gave me complete access to all the databases and collections. Fortunately this instance is only running a few test apps, so I quickly shut down the MongoD instance until I figure this out. Has anybody ever seen this kind of behavior before and know what is going on? The MongoD instance is running on a Linux VM hosted by Azure. The only thing I can think of is that perhaps Azure restored an old copy of the VM, but I received no emails to that effect and everything else on the server seems to be proper, including new daemon processes that I added after I enabled password protection on MongoD.
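    One explanation worth ruling out (an assumption based on how MongoDB of that era behaved): authentication is only enforced when mongod is started with --auth, or with auth = true in its config file, so a restart without that flag leaves your user accounts stored but silently stops enforcing them. A quick check from the shell:

        # is the running daemon using --auth or pointing at a config file?
        ps aux | grep '[m]ongod'

        # if it uses a config file, look for the auth setting (path is the common default, adjust as needed)
        grep -i 'auth' /etc/mongodb.conf

    If auth is missing or commented out there, re-enabling it and restarting mongod should restore the old behavior without touching the stored credentials.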

    Read the article

  • "Fuzzy" Display Problems on New OpenSUSE 11.3 Install

    - by Kryten
    Hi, I have an old desktop that has a new installation of OpenSUSE 11.3 on it. To get straight to the point, whenever I try to start up OpenSUSE, my display goes "fuzzy" (almost like a badly tuned TV) and sometimes (not always) my monitor goes black and says "Out of Range: 15.6kHz/49Hz". I assumed that this was a driver problem, so I downloaded the drivers from nvidia, ran the shell script, it completed, and I rebooted. Then OpenSUSE failed to boot with something like "nscd failed to start". At that point I decided to re-install and I did, but this time I tried un-checking "Automatic Configuration" to see if that helped. I got through the install and still have the problem. Unfortunately, my Linux troubleshooting skills are non-existent, so has anyone got any ideas on what the problem could be (is it the display driver?) and how I can fix it?
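    A first diagnostic step that doesn't require much Linux experience (just a suggestion, not a guaranteed fix): the "Out of Range" message usually means the X server picked a resolution or refresh rate the monitor can't display, and the X log records which driver and modes it settled on:

        # show warnings and errors from the last X start
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log

        # see which graphics driver the kernel is currently using for the card
        /sbin/lspci -k | grep -A 3 -i vga

    Knowing whether the open-source "nouveau"/"nv" driver or the NVIDIA one is in use, and which mode X chose, makes it much easier to decide whether the proprietary driver install is actually the right fix.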

    Read the article

  • script to automatically test if a web site is available

    - by Xoundboy
    I'm a lone web developer with my own CentOS VPS hosting a few small web sites for my clients. Today I discovered my httpd service had stopped (for no apparent reason - but that's another thread). I restarted it, but now I need to find a way to be notified by email and/or SMS if it happens again - I don't like it when my client rings me to tell me their web site doesn't work! I know there are probably many different possibilities, including server monitoring software. I think all I really need is a script that I can run as a cron job from my dev host (which is permanently running in my office) that attempts to load a page from my production server, and if it doesn't load within say 30 seconds then it sends me an email or SMS. I'm pretty rubbish at shell scripting, hence this question. Any suggestions would be gratefully appreciated, thanks to all you clever sysadmin guys and girls out there :)
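    Since the question is essentially "a cron job that fetches a page and emails me if it fails", here is a minimal sketch of that idea. It assumes curl and a working mail command exist on the dev host, and the URL and address are placeholders to replace:

        #!/bin/bash
        # check-site.sh - alert by email if the production site doesn't answer within 30 seconds
        URL="http://www.example.com/"
        ALERT="you@example.com"

        if ! curl --silent --fail --max-time 30 --output /dev/null "$URL"; then
            echo "$URL failed to respond at $(date)" | mail -s "Website down: $URL" "$ALERT"
        fi

    Run it from cron every few minutes, e.g. "*/5 * * * * /home/you/check-site.sh". Many mobile carriers offer email-to-SMS gateways that turn the same alert into a text message, and hosted checkers or a monitoring daemon such as Nagios do the same job with less upkeep if the script approach gets tiresome.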

    Read the article

  • Starting multiple applications in Ubuntu Unity

    - by Black
    I would like to start multiple GUI applications with a single script or command in Ubuntu 12. By now, I have a shell script that starts an application in the foreground, waits for the termination of that application, and afterwards starts several applications (like browser, mailer, IRC client) in the background. The script is working, however all the applications get the same icon and are treated like different windows of one application, i.e. the script. Is there a way to start applications from a script that makes Unity display the icons of the applications, e.g. the Thunderbird icon, instead of a single default icon for the script? The script looks like this:

        #! /bin/bash
        # wait for termination...
        /usr/bin/libreoffice path/to/document
        # in background
        /usr/bin/thunderbird &
        /usr/bin/pidgin &

    Read the article

  • Is there some way Linux editors can tell the programming language without the file extension?

    - by vfclists
    I am editing some scripts on Linux that don't carry the language's file extension, and it seems that the editors, namely vi, nano and gedit, are not applying syntax highlighting because the filenames don't use the language extensions. Are there parameters to pass, or some setting, that can enable them to recognize the language? Update: After some googling I realize that vim has that ability, at least to do some parsing or check the shebang at the top to determine the language. By default Ubuntu does not install the complete vim package, so after installing it, the shell files are recognized. I don't know about nano or gedit, but vi and its graphical counterpart will do.
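    Two extension-free mechanisms that vim honours are the shebang line and an in-file modeline, so a script can carry its own filetype hint. A small sketch of both in a shell script (vim only reads modelines when the 'modeline' option is on, which it normally is for ordinary users):

        #!/bin/bash
        # vim: set filetype=sh :
        # The shebang above lets vim guess the language; the modeline states it explicitly.
        echo "syntax highlighting works without a .sh extension"

    gedit generally guesses the language from the shebang and the file's MIME type, and nano picks syntax from rules in .nanorc, so those two need their own configuration rather than anything inside the file.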

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early, and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
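    Cron itself doesn't impose a memory cap, but it does run jobs with a minimal environment under /bin/sh and throws away any error output unless you capture it. A small diagnostic wrapper (paths and filenames here are illustrative, since the real dated-gzip command isn't shown) would reveal whether svnadmin is dying partway through and why:

        #!/bin/sh
        # dump-svn.sh - run this from cron; keep stderr and the exit code for inspection
        LOG=/home/andrew/svn-dump.log
        OUT=/home/andrew/svn-$(date +%Y%m%d).dump
        {
            echo "=== $(date) ==="
            svnadmin dump /var/svn/repos/myproject > "$OUT"
            echo "svnadmin exit status: $?"
            gzip -f "$OUT"
        } >> "$LOG" 2>&1

    Two crontab-specific gotchas worth checking while you're at it: any literal % in the crontab line itself must be escaped as \% (cron treats a bare % as end-of-command), and PATH under cron may not include the directory your svnadmin lives in. Moving the whole command into a script, as above, sidesteps both.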

    Read the article

  • Eclipse shows errors but compiles when using an external makefile

    - by Anthony
    I have some C++ code using the CImg and Eigen libraries. In the C++ code I define a plugin like this:

        #define cimg_plugin1 "my_plugin.h"
        #include "CImg.h"

    The plugin contains many method definitions that are used in the C++ code. I also have a makefile that, when called from the command line (./make), allows me to compile everything and generates an executable. I want to import this code into a new Eclipse project, and I do it like so:

        - New > Project > C++ project > makefile project > empty project
        - Unmark "Use default location" and select the folder containing my code on the filesystem
        - Project > Properties > C/C++ Build: unmark "Use default build command" and set it to use my makefile
        - Also in Project Properties > C/C++ General > Paths and Symbols: add paths to the folders containing Eigen and CImg
        - Rebuild index
        - Clean project
        - Restart Eclipse

    When I build the project, Eclipse tells me I have more than 1000 errors in "my_plugin.h", but it is still capable of generating the executable. Even so, I would like to get rid of these errors, because they are annoying. Also, if I want to open the declaration of CImg methods used in the plugin, I can't. I know it has been asked before, but none of the solutions I found were satisfactory for me (most of them are enumerated in the previous list). The sources I visited are the following, and I would be really happy if you could find and tell me about others I didn't see:

        Eclipse shows errors but project compile fine
        eclipse C project shows errors (Symbol could not be resolved) but it compiles
        Eclipse CDT shows some errors, but the project is successfully built
        http://www.eclipse.org/forums/index.php/t/247954/

    Read the article

  • What does "Value does not fall within expected range" mean in runtime error?

    - by manuel
    Hi, I'm writing a plugin (dll file) using /clr and trying to implement speech recognition using .NET. But when I run it, I get a runtime error saying "Value does not fall within expected range" - what does the message mean?

        public ref class Dialog : public System::Windows::Forms::Form
        {
        public:
            SpeechRecognitionEngine^ sre;

        private:
            System::Void btnSpeak_Click(System::Object^ sender, System::EventArgs^ e)
            {
                Initialize();
            }

        protected:
            void Initialize()
            {
                //create the recognition engine
                sre = gcnew SpeechRecognitionEngine();

                //set our recognition engine to use the default audio device
                sre->SetInputToDefaultAudioDevice();

                //create a new GrammarBuilder to specify which commands we want to use
                GrammarBuilder^ grammarBuilder = gcnew GrammarBuilder();

                //append all the choices we want for commands.
                //we want to be able to move, stop, quit the game, and check for the cake.
                grammarBuilder->Append(gcnew Choices("play", "stop"));

                //create the Grammar from the GrammarBuilder
                Grammar^ customGrammar = gcnew Grammar(grammarBuilder);

                //unload any grammars from the recognition engine
                sre->UnloadAllGrammars();

                //load our new Grammar
                sre->LoadGrammar(customGrammar);

                //add an event handler so we get events whenever the engine recognizes spoken commands
                sre->SpeechRecognized += gcnew EventHandler<SpeechRecognizedEventArgs^>(this, &Dialog::sre_SpeechRecognized);

                //set the recognition engine to keep running after recognizing a command.
                //if we had used RecognizeMode.Single, the engine would quit listening after
                //the first recognized command.
                sre->RecognizeAsync(RecognizeMode::Multiple);

                //this->init();
            }

            void sre_SpeechRecognized(Object^ sender, SpeechRecognizedEventArgs^ e)
            {
                //simple check to see what the result of the recognition was
                if (e->Result->Text == "play")
                {
                    MessageBox(plugin.hwndParent, L"play", 0, 0);
                }
                if (e->Result->Text == "stop")
                {
                    MessageBox(plugin.hwndParent, L"stop", 0, 0);
                }
            }
        };

    Read the article

  • Can the expect utility handle a case where the process it spawns also spawns a sub process?

    - by davidparks21
    I'm trying to use expect to handle rsync over an ssh shell, but it gets stuck. If I run my rsync command directly it works: it prompts me for my password and copies files to the server (simplified here):

        rsync -e ssh -<other_params>

    If I then enclose that in expect:

        expect -d -c "spawn rsync -e ssh -<other_params>" -c "expect password:" -c "send mypass\r"

    It does not execute properly; the program exits and no files are copied. Even the debug mode isn't giving many clues. My best guess is that rsync is spawning the ssh process, and the ssh process is what needs to be interacted with, but send is picking up the rsync process id and sending the input there. Any thoughts?
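    A pattern worth trying (a guess at the cause, not a verified diagnosis): an expect script finishes as soon as its last command returns, so ending with send means expect can exit - and take the spawned rsync with it - before the transfer completes. Telling it to wait for the spawned process to finish usually fixes that:

        expect -d -c '
            spawn rsync -e ssh -<other_params>
            expect "password:"
            send "mypass\r"
            expect eof
        '

    Because rsync runs ssh on the same pseudo-terminal that expect controls, both the password prompt and the send travel over that single channel, so the "wrong process id" theory is probably a red herring. Setting up ssh keys (or using sshpass) would remove the need for expect entirely.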

    Read the article

  • Unable to HTTP PUT with libcurl

    - by Jesse Beder
    I'm trying to PUT data using libcurl to mimic the command

        curl -u test:test -X PUT --data-binary @data.yaml "http://127.0.0.1:8000/foo/"

    which works correctly. My options look like:

        curl_easy_setopt(handle, CURLOPT_USERPWD, "test:test");
        curl_easy_setopt(handle, CURLOPT_URL, "http://127.0.0.1:8000/foo/");
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1);
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
        curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_data);
        curl_easy_setopt(handle, CURLOPT_READDATA, &yaml);
        curl_easy_setopt(handle, CURLOPT_INFILESIZE, yaml.size());
        curl_easy_perform(handle);

    I believe the read_data function works correctly, but if you ask, I'll post that code. I'm using Django with django-piston, and my update function is never called! (It is called when I use the command line version above.) libcurl's output is:

        * About to connect() to 127.0.0.1 port 8000 (#0)
        *   Trying 127.0.0.1... * connected
        * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
        * Server auth using Basic with user 'test'
        > PUT /foo/ HTTP/1.1
        Authorization: Basic dGVzdDp0ZXN0
        Host: 127.0.0.1:8000
        Accept: */*
        Content-Length: 244
        Expect: 100-continue

        * Done waiting for 100-continue
        ** this is where my read_data handler confirms: read 244 bytes **
        * HTTP 1.0, assume close after body
        < HTTP/1.0 400 BAD REQUEST
        < Date: Thu, 13 May 2010 08:22:52 GMT
        < Server: WSGIServer/0.1 Python/2.5.1
        < Vary: Authorization
        < Content-Type: text/plain
        <
        Bad Request
        * Closing connection #0

    Read the article

  • mysql row locking via php

    - by deezee
    I am helping a friend with a web-based form for their business. I am trying to get it ready to handle multiple users. I have set it up so that just before the record is displayed for editing, I lock the record with the following code:

        $query = "START TRANSACTION;";
        mysql_query($query);
        $query = "SELECT field FROM table WHERE ID = \"$value\" FOR UPDATE;";
        mysql_query($query);

    (Okay, that is greatly simplified, but that is the essence of the MySQL.) It does not appear to be working. However, when I go directly to MySQL from the command line, logging in with the same user and executing

        START TRANSACTION;
        SELECT field FROM table WHERE ID = "40" FOR UPDATE;

    I can effectively block the web form from accessing record "40" and get the timeout warning. I have tried using BEGIN instead of START TRANSACTION. I have tried doing SET AUTOCOMMIT=0 first and starting the transaction after locking, but I cannot seem to lock the row from the PHP code. Since I can lock the row from the command line, I do not think there is a problem with how the database is set up. I am really hoping that there is some simple something that I have missed in my reading. FYI, I am developing on XAMPP version 1.7.3, which has Apache 2.2.14, MySQL 5.1.41 and PHP 5.3.1. Thanks in advance. This is my first time posting, but I have gleaned a lot of knowledge from this site in the past.

    Read the article

  • gpg error - connection already closed?

    - by OopsForgotMyOtherUserName
    I hope someone can help me, because I am lost as to what to try next. I don't know what is causing the error, and I don't see how to figure it out. I keep going between the pgloader.conf examples and what I have, and I don't understand why I keep getting the 'connection already closed' error. The first few lines of my fr.conf are at the very end. I'd really appreciate some guidance here; I've been trying to get this thing going all morning and am getting stuck just on this part. Running this command at the command line:

        /usr/bin/pgloader -c /var/mybin/pgconfs/fr.conf

    yields this in the pgloader log (with the process just hanging):

        more /tmp/pgloader.log
        27-03-2010 12:22:53 pgloader INFO  Logger initialized
        27-03-2010 12:22:53 pgloader INFO  Reformat path is ['/usr/share/python-support/pgloader/reformat']
        27-03-2010 12:22:53 pgloader INFO  Will consider following sections:
        27-03-2010 12:22:53 pgloader INFO    fixed
        27-03-2010 12:22:54 fixed    INFO  fixed processing
        27-03-2010 12:22:54 pgloader INFO  All threads are started, wait for them to terminate
        27-03-2010 12:22:57 fixed    ERROR connection already closed
        27-03-2010 12:22:57 fixed    INFO  closing current database connection

    The first few lines of fr.conf:

        [pgsql]
        host = localhost
        port = 5432
        base = frdb
        user = username
        pass = password

        [fixed]
        table = fr
        format = fixed
        filename = /var/www/fr.txt
        ...
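    Since the [fixed] section is reached before the connection drops, one sanity check worth doing (just a suggestion) is to confirm that the credentials in the [pgsql] section really do open a session from the same account that runs pgloader:

        # same host/port/user/database as fr.conf; psql will prompt for the password
        psql -h localhost -p 5432 -U username frdb -c 'SELECT version();'

    If that fails or disconnects immediately, the problem is on the PostgreSQL side (pg_hba.conf, wrong database name, server not listening on TCP) rather than in the pgloader configuration; if it succeeds, the remaining suspects are the column and format definitions further down in the [fixed] section.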

    Read the article

  • Environment variable for volume names in Windows?

    - by Shinrai
    I'm trying to write a batch file that will look at the volume label of the current drive and report if it's not equivalent to a certain string. Is there a default variable in the shell for this? Can I define one? Am I SOL and I'll actually have to do some (shudder) programming? EDIT: If this is possible in PowerShell that would work fine. (For the curious, we ship our machines with software cloning as a rapid bootable backup solution since most of our customers are daytraders and aren't interested in RAID due to urgency of getting-the-hell-back-to-work-right-away if there's a software corruption problem, and we want to make it immediately obvious if they're booting to the backup drive unintentionally, like say the primary failed entirely. The hope was just to write a simple batch file that would autostart on boot and throw a warning in the event of a problem.)

    Read the article

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading it. What I'm seeing: Input text: Snow Leopard ??????????????? Turns into: Snow Leopard ??????????????? This is in my POST field of an HTTP. If I set JMeter to encode the data, it just puts in the percent sequence for question marks. Interesting note: In the example above there are 15 Japanese characters, and then 15 question marks, so at some point it's being seen as full characters and not just bytes. About the Data: The CSV file is very simple in structure. There's only one field / one column, which I name TERM, and later use as ${TERM} I don't really need full CSV because it's only one string per line. There's no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command line and graphical mode on two machines. JMeter CSV Dataset Config: Filename: japanese-searches.csv File encoding: UTF-8 (also tried without) Variable names: TERM Delimiter: , Allow Quoted Data: False (I also tried True, different, but still wrong) Recycle at EOF: True Stop at EOF: False Staring mode: All threads A few things I've tried: Tried Allow quoted Data. It changed to other strange characters. -Dfile.encoding=UTF-8 Tried encoding the POST, but it just turned into a bunch of %nn for question marks And I'm not sure how "debug" just after the each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call. I'll start checking into that. I haven't done anything with the JMeter functions yet.

    Read the article

  • Vim - show diff on commit in mercurial;

    - by JackLeo
    In my .hgrc I can provide an editor, or a command to launch an editor with options, on commit. I want to write a method or alias that launches $ hg ci so that it not only opens the commit message in Vim, but also splits the window and prints the output of $ hg diff there. I know that I can give parameters to vim by using the +{command} option. Launching $ vim "+vsplit" does the split, but any other options go to the first opened window. So I assume I need a specific function, yet I have no experience in writing my own Vim scripts. The script should:

        - Open a new vertical split with an empty buffer (with vnew, possibly)
        - In the empty buffer, launch :.!hg diff
        - Set the empty buffer's file type to diff: :set ft=diff

    I've written such a function:

        function! HgCiDiff()
            vnew
            :.!hg diff
            set ft=diff
        endfunction

    And in .hgrc I've added the option:

        editor = vim "+HgCiDiff()"

    It kind of works, but I would like the split window to be on the right side (now it opens on the left) and the mercurial message to be the focused window. Also, :wq could be set as a temporary shortcut to :wq<CR>:q! (on the assumption that the mercurial message is focused). Any suggestions to make this a bit more useful and less chunky? UPDATE: I found the vim split guide, so changing vnew to rightbelow vnew opens the diff on the right side.

    Read the article
