Search Results

Search found 5262 results on 211 pages for 'at commands'.


  • Porting Perl to C++ `print "\x{2501}" x 12;`

    - by jippie
    I am porting a program from Perl to C++ as a learning objective. I arrived at a routine that draws a table with commands like the following: Perl: print "\x{2501}" x 12; It draws the character '━' ("box drawings heavy horizontal") 12 times. Now I have figured out part of the problem already: Perl: \x{}, \x00 Hexadecimal escape sequence; C++: \unnnn To print a single Unicode character: C++: printf( "\u250f\n" ); But does C++ have a smart equivalent for the 'x' operator, or does it come down to a for loop? UPDATE Let me include the full source code I am trying to compile with the proposed solution. The compiler throws errors: g++ -Wall -Werror project.cpp -o project project.cpp: In function ‘int main(int, char**)’: project.cpp:38:3: error: ‘string’ is not a member of ‘std’ project.cpp:38:15: error: expected ‘;’ before ‘s’ project.cpp:39:3: error: ‘cout’ is not a member of ‘std’ project.cpp:39:16: error: ‘s’ was not declared in this scope #include <stdlib.h> #include <stdint.h> #include <stdio.h> #include <string.h> int main ( int argc, char *argv[] ) { if ( argc != 2 ) { fprintf( stderr , "usage: %s matrix\n", argv[0] ); exit( 2 ); } else { //std::string s(12, "\u250f" ); std::string s(12, "u" ); std::cout << s; } }
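    A minimal sketch (not from the original post) of one way to get the Perl-style repetition. The compile errors above come from the missing <string> and <iostream> headers; beyond that, the std::string(count, ch) constructor only repeats a single char, so a multi-byte UTF-8 sequence such as "\u2501" has to be appended in a loop. This assumes a UTF-8 locale and terminal:

      #include <iostream>
      #include <string>

      int main()
      {
          const std::string box = "\u2501";   // "box drawings heavy horizontal", UTF-8 encoded
          std::string line;
          for (int i = 0; i < 12; ++i)        // closest equivalent of Perl's `x 12`
              line += box;
          std::cout << line << '\n';
          return 0;
      }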

    Read the article

  • How can I automate new system provisioning with scripts under Mac OS X 10.6?

    - by deeviate
    I've been working on this for days but simply cannot find the correct references to make it work. The idea is to have a script that will baseline newly purchased Macs that come into the company with basic tasks like setting autologin to off, creating a new admin user (for remote admins to access for support), setting a password to unlock the screensaver, and so on. A sample baseline list that admins have to work through on each new machine: Click the Login Options button Set Automatic Login: OFF Check: Show the Restart, Sleep, and Shutdown buttons Uncheck: Show input menu in login window Uncheck: Show password hints Uncheck: Use voice over in the login window Check: Show fast user switching menu as Short Name (note: this is only part of a long list to do on each machine) I've managed to find some references to make some parts work. For example, autologin can be unset with: defaults write /Library/Preferences/.GlobalPreferences com.apple.userspref.DisableAutoLogin -bool TRUE and I've more or less found ways to muscle in new user creation (including prompts) with AppleScript and shell commands. But generally it's tough finding ways to do somewhat simple things like turning on the password to get out of the screensaver or allowing fast user switching. References are either too limited or just nowhere to be seen (e.g. I can unset autologin via the CLI, but the very next setting in System Preferences, "show restart, sleep and shutdown buttons", lives somewhere else and I can't find any command line way to set it). Does anyone have any ideas on a list, document, reference or anything showing where each setting on the system resides, so that I can be pointed in the right direction? Or maybe sample scripts for the above example... Thanks for reading this far, and a huge thank you to whoever has any info on the above.

    Read the article

  • Refactoring: your way to reduce the code complexity of a big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain: class OldClass { method1(arg1, arg2) { ... 200 lines of code ... } method2(arg1) { ... 200 lines of code ... } ... method20(arg1, arg2, arg3) { ... 200 lines of code ... } } The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to my mind: Add several private helper methods to each method and group them in a #region (straightforward refactoring). Use the Command pattern (one command class per OldClass method, in a separate file). Create a helper static class per method with one public method and several private helper methods; OldClass methods delegate their implementation to the appropriate static class (very similar to commands). Anything else? Thank you in advance!

    Read the article

  • How can I filter then modify e-mails using IMAP?

    - by swolff1978
    I have asked this question in a different post here on SO: How can a read receipt be suppressed? I have been doing some research of my own to try and solve this problem, and accessing the e-mail account via IMAP seems like it is going to be a good solution. I have successfully been able to access my own Inbox and mark messages as read with no issue. I have been asked to perform the same task on an Inbox that contains over 23,000 emails. I would like to run the test on a small number of e-mails from that inbox before letting it loose on the whole 23,000. Here is the code I have been running via telnet: . LOGIN [email protected] password . SELECT Inbox . STORE 1:* flags \Seen (this last line marks all the e-mails as read) So my question is: how can I execute that STORE command on a specific group of e-mails, say e-mails that are going to / coming from a specific account? Is there a way to chain the commands, like a FETCH and then the STORE? Or is there a better way to get a collection of e-mails based on certain criteria and then modify ONLY those e-mails - and can that be accomplished through IMAP?
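    A minimal sketch (not from the original post) of the same idea in Python's standard imaplib: issue a SEARCH first so that only the matching messages are touched, then STORE the \Seen flag on each hit. The host, account and sender address below are placeholders; the raw-protocol equivalent is a SEARCH FROM command whose returned message numbers are fed to STORE.

      import imaplib

      conn = imaplib.IMAP4_SSL('imap.example.com')          # placeholder host
      conn.login('[email protected]', 'password')          # placeholder credentials
      conn.select('INBOX')

      # find only the messages from one specific sender
      typ, data = conn.search(None, 'FROM', '"[email protected]"')
      for num in data[0].split():
          conn.store(num, '+FLAGS', '\\Seen')               # mark just this message as read

      conn.close()
      conn.logout()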

    Read the article

  • Starting a process synchronously, and "streaming" the output

    - by Benjol
    I'm looking at trying to start a process from F#, wait till it's finished, but also read its output progressively. Is this the right/best way to do it? (In my case I'm trying to execute git commands, but that is tangential to the question.) let gitexecute (logger:string->unit) cmd = let procStartInfo = new ProcessStartInfo(@"C:\Program Files\Git\bin\git.exe", cmd) // Redirect to the Process.StandardOutput StreamReader. procStartInfo.RedirectStandardOutput <- true procStartInfo.UseShellExecute <- false; // Do not create the black window. procStartInfo.CreateNoWindow <- true; // Create a process, assign its ProcessStartInfo and start it let proc = new Process(); proc.StartInfo <- procStartInfo; proc.Start() |> ignore // Get the output into a string while not proc.StandardOutput.EndOfStream do proc.StandardOutput.ReadLine() |> logger What I don't understand is how proc.Start() can return a boolean and also be asynchronous enough for me to get the output out of the while loop progressively. Unfortunately, I don't currently have a large enough repository - or a slow enough machine - to be able to tell what order things are happening in...
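    A sketch (not from the original post) of the event-based alternative .NET offers for exactly this situation: Start() only reports whether a new process was spawned, while the child writes to the redirected pipe concurrently, so the output can also be consumed line by line via the OutputDataReceived event instead of polling StandardOutput. This assumes the same procStartInfo setup as in the snippet above:

      open System.Diagnostics

      let gitexecuteAsync (logger : string -> unit) (procStartInfo : ProcessStartInfo) =
          use proc = new Process(StartInfo = procStartInfo)
          proc.OutputDataReceived.Add(fun args ->
              if args.Data <> null then logger args.Data)   // fires once per line, as it arrives
          proc.Start() |> ignore
          proc.BeginOutputReadLine()                        // begin asynchronous reads of StandardOutput
          proc.WaitForExit()                                // block until git has finished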

    Read the article

  • Running a process at the Windows 7 Welcome Screen

    - by peelman
    So here's the scoop: I wrote a tiny C# app a while back that displays the hostname, IP address, imaged date, thaw status (we use DeepFreeze), current domain, and the current date/time on the welcome screen of our Windows 7 lab machines. This was to replace our previous information block, which was set statically at startup and actually embedded text into the background, with something a little more dynamic and functional. The app uses a Timer to update the IP address, DeepFreeze status, and clock every second, and it checks to see if a user has logged in and kills itself when it detects such a condition. If we just run it via our startup script (set via group policy), it holds the script open and the machine never makes it to the login prompt. If we use something like the start or cmd commands to start it off under a separate shell/process, it runs until the startup script finishes, at which point Windows seems to clean up any and all child processes of the script. We're currently able to bypass that using psexec -s -d -i -x to fire it off, which lets it persist after the startup script is completed, but this can be incredibly slow, adding anywhere between 5 seconds and over a minute to our startup time. We have experimented with using another C# app to start the process, via the Process class, using WMI calls (Win32_Process and Win32_ProcessStartup) with various startup flags, etc., but all end with the same result: the script finishes and the info block process gets killed. I tinkered with rewriting the app as a service, but services were never designed to interact with the desktop, let alone the login window, and getting things operating in the right context never really seemed to work out. So, the question: does anybody have a good way to accomplish this, i.e. launch a task so that it is independent of the startup script and runs on top of the welcome screen?

    Read the article

  • How to parse a string (by a "new" markup) with R?

    - by Tal Galili
    Hi all, I want to use R to do string parsing that (I think) is like simplistic HTML parsing. For example, let's say we have the following two variables: Seq <- "GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA" Str <- ">>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<." Say that I want to parse "Seq" according to "Str", using the legend here: Seq: GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA Str: >>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<. (the original post includes an ASCII diagram, flattened here, which brackets the three inner paired runs as Stem 1, Stem 2 and Stem 3, and the outermost pair as Stem 0). Assume that we always have 4 stems (0 to 3), but that the number of letters before and after each of them can vary. The output should be something like the following list structure: list( "Stem 0 opening" = "GCCTCGA", "before Stem 1" = "TA", "Stem 1" = list(opening = "GCTC", inside = "AGTTGGGA", closing = "GAGC" ), "between Stem 1 and 2" = "G", "Stem 2" = list(opening = "TACGA", inside = "CTGAAGA", closing = "TCGTA" ), "between Stem 2 and 3" = "AGGtC", "Stem 3" = list(opening = "ACCAG", inside = "TTCGATC", closing = "CTGGT" ), "After Stem 3" = "", "Stem 0 closing" = "TCGGGGC" ) I don't have any experience with programming a parser, and would like advice as to what strategy to use when programming something like this (and any recommended R commands to use). What I was thinking of is to first get rid of "Stem 0", then go through the inner string with a recursive function (let's call it "seperate.stem") that each time will split the string into: 1. before stem 2. opening stem 3. inside stem 4. closing stem 5. after stem, where the "after stem" part is then recursively fed into the same function ("seperate.stem"). The thing is that I am not sure how to do this without using a loop. Any advice will be most welcome.
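    As a sketch of one possible starting point (not from the original post): since Str only ever contains runs of '>', '.' and '<', rle() can find the run boundaries, and a vectorised substring() call can then cut Seq at the same positions with no explicit loop. Labelling the resulting pieces as the stems is left to do:

      # split Str into runs of '>', '.' and '<', then cut Seq at the same boundaries
      runs   <- rle(strsplit(Str, "")[[1]])
      ends   <- cumsum(runs$lengths)
      starts <- ends - runs$lengths + 1
      pieces <- substring(Seq, starts, ends)
      names(pieces) <- runs$values
      pieces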

    Read the article

  • Problem with configure script

    - by cube
    I am running into a problem with the ./configure script for ffmpeg. My Linux environment uses BusyBox, which only provides a limited set of Linux commands. One command used in the ffmpeg ./configure script is mktemp -u; the problem is that the BusyBox mktemp does not recognize the -u switch as valid, so it complains about it and breaks the configure process. This is the relevant code in ./configure which uses the mktemp -u command: if ! check_cmd type mktemp; then # simple replacement for missing mktemp # NOT SAFE FOR GENERAL USE mktemp(){ echo "${2%XXX*}.${HOSTNAME}.${UID}.$$" } fi tmpfile(){ tmp=$(mktemp -u "${TMPDIR}/ffconf.XXXXXXXX")$2 && (set -C; exec > $tmp) 2>/dev/null || die "Unable to create temporary file in $TMPDIR." append TMPFILES $tmp eval $1=$tmp } I am not good with bash scripting at all, so I was wondering if anyone had an idea on how I can force this configure script not to use mktemp -u and instead use the 'replacement' alternative that is already there, as per the snippet above. Thanks. By the way, simply removing the -u switch does not work, nor does replacing it with -t or -p. I believe mktemp has to be bypassed completely.
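    Note that the fallback in that snippet is only used when `check_cmd type mktemp` fails, i.e. when no mktemp exists at all; since BusyBox does provide one, the fallback never kicks in. One hedged workaround (an assumption, not from the original post) is to put a small wrapper script named mktemp in a directory that comes earlier in PATH than BusyBox, so configure finds a mktemp that simply tolerates -u:

      #!/bin/sh
      # hypothetical wrapper saved as "mktemp" earlier in PATH: ignore any switches
      # (such as -u) and fake a unique name from the template, mirroring the
      # fallback that configure already contains
      for arg in "$@"; do
          case $arg in
              -*) ;;                 # drop options BusyBox would reject
              *)  template=$arg ;;
          esac
      done
      echo "${template%.XXX*}.$$"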

    Read the article

  • OpenGL: question about glutMainLoop()

    - by lego69
    Can somebody explain how glutMainLoop works? And a second question: why is glClearColor(0.0f, 0.0f, 1.0f, 1.0f); defined after glutDisplayFunc(RenderScene);, given that we first call glClear(GL_COLOR_BUFFER_BIT); and only then define glClearColor(0.0f, 0.0f, 1.0f, 1.0f);? int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowSize(800, 00); glutInitWindowPosition(300,50); glutCreateWindow("GLRect"); glutDisplayFunc(RenderScene); glutReshapeFunc(ChangeSize); glClearColor(0.0f, 0.0f, 1.0f, 1.0f); <-- glutMainLoop(); return 0; } void RenderScene(void) { // Clear the window with current clearing color glClear(GL_COLOR_BUFFER_BIT); // Set current drawing color to red // R G B glColor3f(1.0f, 0.0f, 1.0f); // Draw a filled rectangle with current color glRectf(0.0f, 0.0f, 50.0f, -50.0f); // Flush drawing commands glFlush(); }
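    A hedged sketch of the ordering puzzle: glClearColor only records state inside the GL context, while RenderScene (and its glClear call) is not executed where it appears in main - it is a callback that glutMainLoop invokes later, every time the window needs redrawing, so the clear colour is already set by then. Conceptually (the helper names below are made up; this is not the real GLUT source):

      /* hypothetical pseudocode for what glutMainLoop() does; it never returns */
      void glutMainLoop_sketch(void)
      {
          for (;;) {
              wait_for_window_or_input_event();       /* hypothetical helpers */
              dispatch_reshape_and_input_callbacks();
              if (redisplay_requested())
                  RenderScene();   /* the glutDisplayFunc callback; only now does
                                      glClear() use the colour recorded earlier
                                      by glClearColor() */
          }
      }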

    Read the article

  • make target is never considered up to date

    - by Michael
    Cygwin make always reprocesses the $(chrome_jar_file) target, even after the first successful build, so I never get the "up to date" message and always see the commands for $(chrome_jar_file) executing. However, this happens only on Windows 7. On Windows XP, once it is built it stays intact and there are no more rebuilds. I narrowed the issue down to one prerequisite - $(jar_target_dir). Here is part of the code: # The location where the JAR file will be created. jar_target_dir := $(build_dir)/chrome # The main chrome JAR file. chrome_jar_file := $(jar_target_dir)/$(extension_name).jar # The root of the JAR sources. jar_source_root := chrome # The sources for the JAR file. jar_sources := bla #... some files, doesn't matter jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources)) $(chrome_jar_file): $(jar_sources) $(jar_target_dir) @echo "Creating chrome JAR file." @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir) @echo "Creating chrome JAR file. Done!" $(jar_target_dir): $(build_dir) echo "Creating jar target dir..." if [ ! -x $(jar_target_dir) ]; \ then \ mkdir $(jar_target_dir); \ fi $(build_dir): @if [ ! -x $(build_dir) ]; \ then \ mkdir $(build_dir); \ fi So if I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.
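    A sketch of the usual explanation and fix (an assumption, not from the original post): creating the JAR inside $(jar_target_dir) updates the directory's modification time, so a directory listed as a normal prerequisite keeps looking newer than the JAR and forces a rebuild; how the directory mtime behaves differs between filesystems, which could explain the Windows 7 vs XP difference. With GNU make, listing the directory as an order-only prerequisite (after a |) keeps the "create the directory first" behaviour without the timestamp comparison:

      # order-only prerequisite: the directory must exist, but its changing
      # timestamp no longer makes the JAR look out of date
      $(chrome_jar_file): $(jar_sources) | $(jar_target_dir)
              ... same recipe as before ...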

    Read the article

  • Python client / server question

    - by AustinM
    I'm working on a bit of a project in python. I have a client and a server. The server listens for connections and once a connection is received it waits for input from the client. The idea is that the client can connect to the server and execute system commands such as ls and cat. This is my server code: import sys, os, socket host = '' port = 50105 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host, port)) print("Server started on port: ", port) s.listen(5) print("Server listening\n") conn, addr = s.accept() print 'New connection from ', addr while (1): rc = conn.recv(5) pipe = os.popen(rc) rl = pipe.readlines() file = conn.makefile('w', 0) file.writelines(rl[:-1]) file.close() conn.close() And this is my client code: import sys, socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) host = 'localhost' port = input('Port: ') s.connect((host, port)) cmd = raw_input('$ ') s.send(cmd) file = s.makefile('r', 0) sys.stdout.writelines(file.readlines()) When I start the server I get the right output, saying the server is listening. But when I connect with my client and type a command the server exits with this error: Traceback (most recent call last): File "server.py", line 21, in <module> rc = conn.recv(2) File "/usr/lib/python2.6/socket.py", line 165, in _dummy raise error(EBADF, 'Bad file descriptor') socket.error: [Errno 9] Bad file descriptor On the client side, I get the output of ls but the server gets screwed up.
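    A sketch of one likely explanation and fix (an assumption based on the traceback, not from the original post): with everything indented inside while (1), conn.close() runs at the end of the first iteration, so the next conn.recv() operates on a closed socket and raises EBADF. Since the client reads until EOF and then exits, accepting a fresh connection per iteration keeps the one-command-per-connection protocol working:

      while 1:
          conn, addr = s.accept()
          print 'New connection from ', addr
          rc = conn.recv(1024)                  # read the whole command, not just 5 bytes
          if rc:
              pipe = os.popen(rc)
              conn.sendall(''.join(pipe.readlines()))
          conn.close()                          # closing gives the client's readlines() its EOF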

    Read the article

  • Getting started with MIT Proto

    - by Charles
    MIT Proto lacks a basic getting started guide. How do I find a shell that accepts commands like (def foo...) and proto -n 1000 -l -m ...? http://groups.csail.mit.edu/stpg/proto.html I can run in my bash shell: ./proto -n 1000 -s 0.1 -T -l "(red (gradient (= (mid) 0)))" I can't figure out how to run e.g. channel.proto: (def channel (src dst width) (let* ((d (distance src dst)) (trail (<= (+ (gradient src) (gradient dst)) (+ d 0.01))) ;; float error ;; (trail (= (+ (gradient src) (gradient dst)) d)) ) (dilate trail width))) ;; To see a channel calculated from geometric primitives, run: ;; proto -n 1000 -l -m -s 0.5 "(blue (channel (sense 1) (sense 2) 10))" ;; click on a device and hit 't' to set up the source, then click on ;; another device and hit 'y' to designate the destination. At first ;; every device will be blue, but then it should clear and you should ;; see a thick blue path connecting the two devices you selected. Thanks! P.S. Somebody please tag this mit-proto. I can't.

    Read the article

  • Best workflow with Git & GitHub

    - by Tom Schlick
    Hey guys, I'm looking for some advice on how to properly structure the workflow for my team with Git & GitHub. We are recent SVN converts and it's kind of confusing how we should best set up our day-to-day workflow. Here is a little background: I'm comfortable with the command line and my team is pretty new to it, but they can follow and use commands. We are all working on the same project with 3 environments (development, staging, and production). We are a mix of developers & designers, so some use the Git GUI and some the command line. Our setup in SVN went something like this: we had a branch for development, staging and production. When people were confident with code they would commit and then merge it into staging. The server would update itself, and on release day (weekly) we would do a diff and push the changes to the production server. Now I have set up those branches and got the process with the server running, but it's the actual workflow that is confusing the hell out of me. It seems like overkill that every time someone makes a change to a file they would create a new branch, commit, merge, and delete that branch... From what I have read, they would be able to do it on a specific commit (using the hash) - do I have that right? Is this an acceptable way to go about things with Git? Any advice would be greatly appreciated.

    Read the article

  • Writing my own Unix shell in C - problems with PATH and execv

    - by user1287523
    I'm writing my own shell in C. It needs to be able to display the user's current directory, execute commands based on the full path (it must use execv), and allow the user to change the directory with cd. This IS homework. The teacher only gave us a basic primer on C and a very brief skeleton of how the program should work. Since I'm not one to give up easily, I've been researching how to do this for three days, but now I'm stumped. This is what I have so far: it displays the user's username, computer name, and current directory (defaulting to the home directory); prompts the user for input and reads it; splits the user's input by " " into an array of arguments; and splits the environment variable PATH by ":" into an array of tokens. I'm not sure how to proceed from here. I know I've got to use the execv command, but in my research on Google I haven't really found an example I understand. For instance, if the command is bin/ls, how does execv know to display all the files/folders from the home directory? How do I tell the system I changed the directory? I've been using this site a lot, which has been helpful: http://linuxgazette.net/111/ramankutty.html but again, I'm stumped. Thanks for your help. Let me know if I should post some of my existing code; I wasn't sure if it was necessary though.
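    A minimal sketch (under stated assumptions, not the full assignment) of how execv is typically called from a shell: fork a child, have the child call execv with the full path plus a NULL-terminated argument array, and have the parent wait. execv itself knows nothing about "the current directory": ls simply lists whatever working directory the child inherits from your shell, which is also why cd has to be handled inside the shell itself with chdir() rather than through execv.

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      /* args is NULL-terminated, e.g. { "/bin/ls", "-l", NULL } */
      static void run_external(char **args)
      {
          pid_t pid = fork();
          if (pid == 0) {               /* child: replace this process image      */
              execv(args[0], args);
              perror("execv");          /* reached only if execv itself failed    */
              _exit(127);
          } else if (pid > 0) {
              waitpid(pid, NULL, 0);    /* parent: wait for the command to finish */
          } else {
              perror("fork");
          }
      }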

    Read the article

  • Grep without storing the search in the "/ register in Vim

    - by Phro
    In my .vimrc I have a mapping that makes a line of text 'title capitalized': noremap <Leader>at :s/\v<(.)(\w{2,})/\u\1\L\2/g<CR> However, whenever I run this mapping, it highlights every word that is at least three characters long in my entire document. Of course I could get this behaviour to stop simply by appending :nohlsearch<CR> to the end of the mapping, but this is an awkward hack that still leaves the bigger problem: the last search has been replaced by \v<(.)(\w{2,}). Is there any way to use the search commands in Vim without storing the last search in the "/ register; a 'silent' search of sorts? That way, after running this title-making command, I can still use my previous search to navigate the document using n, N, etc. Edit Using @brettanomyces' answer, I found that simply setting the mapping: noremap <Leader>at :call setline(line('.'),substitute(getline('.'), '\v<(.)(\w{2,})', '\u\1\L\2', 'g'))<CR> will successfully perform the substitution without storing the searched text into the / register.
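    As an alternative sketch (assuming a reasonably recent Vim; not taken from the original post), :keeppatterns runs a command without touching the last-used search pattern, so the original :s mapping can be kept as-is:

      " :keeppatterns leaves "/ and the last search pattern untouched,
      " so n and N keep navigating the previous search afterwards
      noremap <Leader>at :keeppatterns s/\v<(.)(\w{2,})/\u\1\L\2/g<CR>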

    Read the article

  • PHP transfer files from server to server in LAN

    - by cheapez
    So, I have 5-6 pages of requirements, and I'm trying to build this application in PHP based on them. I want to transfer files from one server to another over the LAN, and then send a shell command to the other server to find out if the file has been transferred successfully. In PHP, I can transfer files using FTP and send shell commands using SSH. Using the methods above, I will need to open a connection to the server first, but I don't know the FTP server name, domain name, IP address, or anything like that. I only know the server ID (I'm not sure what this ID is, but I guess it is like the computer's name). An example of the server ID is: "c23bap234" How do I open a connection with just that server ID? These servers are in the same building, are connected over the LAN, and have no connection to the outside world. The machines have PHP, Apache, ... installed. If my post doesn't make sense to you, it's because I'm a learner. I hope someone can help me on this. Thanks in advance.

    Read the article

  • C Map String to Function

    - by Scriptonaut
    So, I'm making a Unix minishell, and have come to a roadblock. I need to be able to execute built-in functions, so I made a function: int exec_if_built_in(char **args) It takes an array of strings (the first being the command, and the rest being arguments). For non-built-in commands I simply use something like execvp; however, I need to find a way to map the first string to a function. I was thinking of making two arrays, one of strings and another with their corresponding function pointers. However, since many of these functions will be different (they return and accept different things), this approach won't work. I also thought of making an array of structs with a name property and a function pointer property, but once again, due to the varied nature of the functions I'll be using, this won't work. So, what's the best way to execute a function based on the input of a string? How do I map a string to a certain function? I'm not very familiar with function pointers so I may be missing something. Thank you guys for the help :)
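    A minimal sketch (an assumption, not from the original post) of the usual dispatch-table approach: the varied-signature problem goes away if every built-in is given the same signature as exec_if_built_in itself, i.e. it takes the whole args array and returns an int status. The builtin_cd/builtin_exit names below are hypothetical placeholders for functions defined elsewhere.

      #include <string.h>

      typedef int (*builtin_fn)(char **args);

      int builtin_cd(char **args);      /* hypothetical built-ins, defined elsewhere */
      int builtin_exit(char **args);

      struct builtin {
          const char *name;
          builtin_fn  fn;
      };

      static const struct builtin builtins[] = {
          { "cd",   builtin_cd   },
          { "exit", builtin_exit },
      };

      int exec_if_built_in(char **args)
      {
          size_t i;
          for (i = 0; i < sizeof builtins / sizeof builtins[0]; i++) {
              if (strcmp(args[0], builtins[i].name) == 0)
                  return builtins[i].fn(args);   /* dispatch on the command name */
          }
          return -1;                             /* not a built-in */
      }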

    Read the article

  • Why do I need to click twice on a WPF listbox item in order to fire a command?

    - by Donal
    Hi, I'm trying to make a standard WPF ListBox that is filled dynamically, where each item in the list box launches a command when clicked. Currently I have a working ListBox which can be filled, and each item will fire the correct command, but in order to fire the command I have to click the list item twice: i.e. click once to select the item, then click on the actual text to fire the command. As the list is dynamically created, I had to create a data template for the list items: <ListBox.ItemTemplate> <DataTemplate> <TextBlock Margin="4,2,4,2"> <Hyperlink TextDecorations="None" Command="MyCommands:CommandsRegistry.OpenPanel"> <TextBlock Text="{Binding}" Margin="4,2,4,2"/> </Hyperlink> </TextBlock> </DataTemplate> </ListBox.ItemTemplate> Basically, how do I remove the need to click twice? I have tried to use event triggers to fire the click event on the hyperlink element when the list box item is selected, but I can't get it to work. Or is there a better approach to dynamically fill a ListBox and attach commands to each list item? Thanks

    Read the article

  • XSLT, process elements one by one

    - by qui
    Hi, I am quite weak at XSLT so this might seem obvious. Here is some sample XML: <term> <name>cholecystocolonic fistula</name> <definition>blah blah</definition> <reference>cholecystocolostomy</reference> </term> And here is the XSLT I wrote a while ago to process it: <xsl:template name="term"> { "dictitle": "<xsl:value-of select="name" disable-output-escaping="yes" />", "html": "<xsl:value-of select="definition" disable-output-escaping="yes"/>", "referece": "<xsl:value-of select="reference" disable-output-escaping="yes"/> } </xsl:template> Basically I am creating JSON from the XML. The requirements have now changed so that the XML can have more than one definition tag and reference tag. They can appear in any order, i.e. definition, reference, reference, definition, reference. How can I update the XSLT to accommodate this? It's probably worth mentioning that because my XSLT processor uses .NET, I can only use XSLT 1.0 constructs. Many thanks!
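    A sketch of one XSLT 1.0 approach (an assumption, not a drop-in replacement for the template above): iterate over the definition and reference children together in document order and emit one JSON field per element, adding a comma everywhere except after the last one.

      <!-- inside the "term" template -->
      <xsl:for-each select="definition | reference">
        <xsl:choose>
          <xsl:when test="self::definition">"html": "<xsl:value-of select="." disable-output-escaping="yes"/>"</xsl:when>
          <xsl:otherwise>"reference": "<xsl:value-of select="." disable-output-escaping="yes"/>"</xsl:otherwise>
        </xsl:choose>
        <xsl:if test="position() != last()">,</xsl:if>
      </xsl:for-each>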

    Read the article

  • How can I simplify this user interface?

    - by Bears will eat you
    I'm writing an internal-tools webapp; one of the central pages in this tool has a whole bunch of related commands the user can execute by clicking one of a row of buttons on the page. Ideally, all of the buttons would fit on one line. Ordinarily I'd do this by changing each widget from a button with a (sometimes long) text label to a simple, compact icon - e.g. one of the buttons could be replaced by a familiar disk icon. Unfortunately, I don't think I can do this for every button on this particular page. Some of the command buttons just don't have good visual analogs - "VDS List", for example. Or, if I needed to add another button in the future for some other kind of list, I'd need two icons that both communicate "list-ness" and which list. So, I'm still considering this option, but I don't love it. It has now come time for me to add yet another button to this section (don't you love internal tools?). There's not enough room on that single line to fit the new button. Aside from the icon solution I already mentioned, what would be a good* way to simplify/declutter/reduce or otherwise improve this UI? *As per Jakob Nielsen's article, I'd like to think that a dropdown menu is not the solution.

    Read the article

  • MySQL Query - WHERE and IF?

    - by Prash
    I'm not quite sure how to write this query. Basically, I'm going to have a table with two columns (OS and country_code) - more columns too, but those are the conditional ones. These will either be set to 0 for all, or to specific values separated by commas. Now, what I'm trying to achieve is to pull data from the table if OS and country_code are 0, or if they contain matching data (separated by commas). Then, I have a column for time. I want to select rows where the time is GREATER than the time column, unless the column time_t is set to false, in which case this shouldn't matter. I hope I explained it right? This is what I kind of have so far: $get = $db->prepare("SELECT * FROM commands WHERE country_code = 0 OR country_code LIKE :country_code AND OS = 0 OR OS LIKE :OS AND IF (time_t = 1, expiry > NOW()) "); $get->execute(array( ':country_code' => "%{$data['country_code']}%", ':OS' => "%{$data['OS']}%" ));
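    A sketch of one way the WHERE clause could be restructured (an assumption about the intent, not from the original post): in MySQL, AND binds more tightly than OR, so each equals-zero/LIKE pair needs its own parentheses, and the three-argument IF() isn't needed if the time check is folded into the boolean logic. This assumes time_t is a 0/1 flag:

      SELECT *
      FROM commands
      WHERE (country_code = 0 OR country_code LIKE :country_code)
        AND (OS = 0 OR OS LIKE :OS)
        AND (time_t = 0 OR expiry > NOW())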

    Read the article

  • Please help me with a PowerShell script which rearranges paths

    - by Hamish Grubijan
    Hi, I have both Sybase and MSFT SQL Servers installed. There are times when Sybase interferes with MS SQL because they have some overlapping commands. So, I need two scripts: A) When run, script A backs up the current path, grabs all path entries that contain sybase or SYBASE or SyBASE (you get the point) and moves them all to the very end of the path, while preserving their order. B) When run, script B restores the path from the backup. Both script A and script B should affect the path immediately. So, if a.bat calls patha.ps1 and pathb.ps1, it looks like this: @REM Old path here call patha.ps1 @REM At this point the effective path should be different. call pathb.ps1 @REM Effective old path again Please let me know if this does not make sense. I am not sure if the call command is the best one to use. I have never used PowerShell before. I can try to formulate the same thing in Python (I know S.O. users tend to ask "What have you tried so far?"). Well, at this point I am VERY slow at writing anything in PowerShell. Please help.
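    A sketch of what patha.ps1 could look like (an assumption, not from the original post; OLD_PATH is a made-up backup variable). One caveat worth knowing: a .ps1 invoked from a batch file runs in its own process, so the modified PATH is only visible to that PowerShell process and anything it launches, not to the calling cmd.exe.

      # patha.ps1 - back up PATH, then move any Sybase entries to the end
      $env:OLD_PATH = $env:Path                                   # hypothetical backup variable
      $parts  = $env:Path -split ';'
      $sybase = $parts | Where-Object { $_ -match 'sybase' }      # -match is case-insensitive
      $others = $parts | Where-Object { $_ -notmatch 'sybase' }
      $env:Path = (@($others) + @($sybase)) -join ';'

      # pathb.ps1 would simply restore it:
      # $env:Path = $env:OLD_PATH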

    Read the article

  • Corrupt output with an HttpModule

    - by clementi
    I have an HttpModule that looks at the query string for a parameter called "cmd" and executes one of a small set of predefined commands that display server stats in XML. For example, http://server01?cmd=globalstats. Now, on rare occasions, like once out of hundreds of times, I will get corrupt output like this: <!-- the stats start displaying fine... --> <stats> <ServerName>SERVER01</ServerName> <StackName>Search</StackName> <TotalRequests>945</TotalRequests> <!-- ...until something has gone awry and now we're getting the markup of the home page! --> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> ...the rest of the home page markup... (Remove the comments in the example above.) I'm not all that familiar with HttpModules and the IIS pipeline, but could this be a threading problem? Or, what else?

    Read the article

  • Can git avoid storing the history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: is there a way to disable storing the full history of specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. When I simply do git svn clone, it creates a very big repo - big enough to be bigger than my whole HDD. The problem lies in binary directories whose history is too large. The latest binaries are required for a proper local build, but their history is not required at all for my development process; I will never change them myself. I would like to store only the latest versions of specific folders, or maybe a history of no more than a week. I could only find a filter for git svn fetch which excludes specific folders entirely. This is not exactly what I need. It's OK with me to have a cron task which deletes history from specific folders, but I do not know how to make one. Also, cron does not solve the problem of the first git svn clone. P.S. The SVN repository structure cannot be changed by any means.

    Read the article

  • C/Unix: select() send and receive with the same socket descriptor

    - by RileyVanZeeland
    I want to use select to receive and send on the same socket descriptor on the server side of a client/server. timestruct* myTime; sockfd = accept(listeningFd, 0, 0); while(1) FD_ZERO(&my_fd_set) maxFd = sockfd FD_ZERO(&my_fd_set); FD_SET(sockfd, &my_fd_set); select(maxFd+1, &my_fd_set, &my_fd_set, NULL, myTime); for (j=0; j<=maxFd; j++) if(FD_ISSET(j, &temp_fd_set)) if(j==sockfd) send() if(j==sockfd) recv() This is essentially what I want to do. Obviously this won't work, because sockfd is going to be the same value for sending and receiving. Is there a way I can do this without using fork()? Currently I have a blocking recv and send, but the server could be required to recv multiple commands while another command is being processed to send back to the client. I am very new to C and also to select(). Because select has the three fd_set arguments (read, write, and exception) I thought maybe I could do this. Thank you.
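    A minimal sketch (an assumption, not from the original post) of how the same descriptor can be watched for both directions at once: pass two separate fd_set variables to select(), put sockfd in both, and check each set independently afterwards; no fork() is needed.

      fd_set rset, wset;
      struct timeval tv;

      FD_ZERO(&rset);
      FD_ZERO(&wset);
      FD_SET(sockfd, &rset);                  /* "is there data to recv()?"       */
      FD_SET(sockfd, &wset);                  /* "can I send() without blocking?" */
      tv.tv_sec = 1;
      tv.tv_usec = 0;

      if (select(sockfd + 1, &rset, &wset, NULL, &tv) > 0) {
          if (FD_ISSET(sockfd, &rset)) {
              /* recv(sockfd, ...) will not block here */
          }
          if (FD_ISSET(sockfd, &wset)) {
              /* send(sockfd, ...) will not block here */
          }
      }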

    Read the article
