Search Results

Search found 24715 results on 989 pages for 'output parameters'.


  • How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N

    - by tucuxi
    The question gives all necessary data: what is an efficient algorithm to generate a sequence of K non-repeating integers within a given interval. The trivial algorithm (generating random numbers and, before adding them to the sequence, looking them up to see if they were already there) is very expensive if K is large and near enough to N. The algorithm provided here seems more complicated than necessary, and requires some implementation. I've just found another algorithm that seems to do the job fine, as long as you know all the relevant parameters, in a single pass.
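    One well-known single-pass approach (possibly the one referred to above) is Knuth's selection sampling: walk the interval once and keep each value with probability (still needed) / (still available). A sketch in Python, shown only as an illustration; it runs in O(N) time rather than O(K) and returns the sample in increasing order:

        import random

        def sample_k_of_n(k, n):
            """Pick k distinct integers from range(n) in a single pass."""
            result = []
            needed = k
            for i in range(n):
                remaining = n - i
                # keep i with probability needed/remaining; this yields a uniform k-subset
                if random.random() * remaining < needed:
                    result.append(i)
                    needed -= 1
                    if needed == 0:
                        break
            return result

        print(sample_k_of_n(5, 100))   # e.g. [3, 17, 42, 68, 91]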

    Read the article

  • echo -e acts differently when run in a script by root on ubuntu

    - by ekrub
    When running a bash script on ubuntu 9.10, I get different behavior from bash echo's "-e" option depending on whether or not I'm running as root. Consider this script:

        $ cat echo-test
        if [ "`whoami`" = "root" ]; then
          echo "Running as root"
        fi
        echo Testing /bin/echo -e
        /bin/echo -e "foo\nbar"
        echo Testing bash echo -e
        echo -e "foo\nbar"

    When run as non-root user, I see this output:

        $ ./echo-test
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        foo
        bar

    When run as root, I see this output:

        $ sudo ./echo-test
        Running as root
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        -e foo
        bar

    Notice the "-e" being echoed in the last case ("-e foo" instead of "foo" on the second-to-last line). When running a script as root, the echo command runs as if "-e" was given and, if -e is given, the option itself is echoed. I can understand some subtle differences in behavior between /bin/echo and bash echo, but I would expect bash echo to behave the same no matter which user invokes it. Anyone know why this is the case? Is this a bug in bash echo? FYI -- I'm running GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
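    Not an explanation of the root/non-root difference, but one observation: the root output (escapes interpreted, "-e" echoed literally) is exactly how dash's builtin echo behaves, so it is worth checking which shell actually interprets the script in each case (the script as shown has no shebang). A hedged sketch of a version that sidesteps echo's portability quirks entirely by using printf:

        #!/bin/bash
        # printf interprets backslash escapes the same way in bash, dash and /usr/bin/printf,
        # so it avoids the echo -e trap altogether
        printf 'foo\nbar\n'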

    Read the article

  • asp mvc: how to pass parameter to controller using jquery api?

    - by Grayson Mitchell
    I am following the following tutorial (http://www.highoncoding.com/Articles/642_Creating_a_Stock_Widget_in_ASP_NET_MVC_Application.aspx) on using ajax to render a partial form, but in this example parameters are not passed, and I have not been able to work out how to do it... This code works with no parameter:

        function GetDetails() {
            $("#divDetails").load('Details');
        }

    This is my attempt to add a parameter, but it does not work (can't find the action):

        function GetDetails() {
            $("#divDetails").load('Details?Id=20');
        }
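    For what it's worth, jQuery's .load() also accepts a separate data argument. A sketch, assuming a controller and action along the lines of HomeController.Details(int id) (the "Home" controller name and the id parameter are assumptions, not from the tutorial):

        function GetDetails() {
            // passing an object makes jQuery issue a POST; the action must accept POST
            $("#divDetails").load('/Home/Details', { id: 20 });

            // or keep it a plain GET, but with an app-rooted path so routing can resolve it:
            // $("#divDetails").load('/Home/Details?id=20');
        }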

    Read the article

  • Sending Mail using PHP on IIS 7 - Windows 2008 Server

    - by Roman
    Hi, I'm having trouble sending mail using PHP mail() on IIS 7 using Windows 2008 Server. The server is dedicated, thus I have full control over my machine. php.ini looks fine ([mail function] is configured) and I don't get any error from mail() (with the right parameters, of course). Btw, I have ASP and ASP.NET sending mails without any problems. Would be very grateful for help. Regards, Roman
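    For reference, the Windows build of PHP has no local sendmail; mail() relays through the SMTP server named in php.ini, so that server has to accept relay from the web box. A sketch of the relevant directives (host name and address here are placeholders, not values from the question):

        [mail function]
        ; SMTP server that accepts relay from this machine (e.g. the local IIS SMTP service)
        SMTP = localhost
        smtp_port = 25
        sendmail_from = webmaster@example.com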

    Read the article

  • C# iterator is executed twice when composing two IEnumerable methods

    - by achristoph
    I just started learning about C# iterators but got confused with the flow of the program after reading its output. The foreach with uniqueVals seems to be executed twice. My understanding is that the first few lines, up to the line before "Nums in Square: 3", should not be there. Can anyone help to explain why this happens? The output is:

        Unique: 1
        Adding to uniqueVals: 1
        Unique: 2
        Adding to uniqueVals: 2
        Unique: 2
        Unique: 3
        Adding to uniqueVals: 3
        Nums in Square: 3
        Unique: 1
        Adding to uniqueVals: 1
        Square: 1
        Number returned from Unique: 1
        Unique: 2
        Adding to uniqueVals: 2
        Square: 2
        Number returned from Unique: 4
        Unique: 2
        Unique: 3
        Adding to uniqueVals: 3
        Square: 3
        Number returned from Unique: 9

    The code:

        static class Program
        {
            public static IEnumerable<T> Unique<T>(IEnumerable<T> sequence)
            {
                Dictionary<T, T> uniqueVals = new Dictionary<T, T>();
                foreach (T item in sequence)
                {
                    Console.WriteLine("Unique: {0}", item);
                    if (!uniqueVals.ContainsKey(item))
                    {
                        Console.WriteLine("Adding to uniqueVals: {0}", item);
                        uniqueVals.Add(item, item);
                        yield return item;
                        Console.WriteLine("After Unique yield: {0}", item);
                    }
                }
            }

            public static IEnumerable<int> Square(IEnumerable<int> nums)
            {
                Console.WriteLine("Nums in Square: {0}", nums.Count());
                foreach (int num in nums)
                {
                    Console.WriteLine("Square: {0}", num);
                    yield return num * num;
                    Console.WriteLine("After Square yield: {0}", num);
                }
            }

            static void Main(string[] args)
            {
                var nums = new int[] { 1, 2, 2, 3 };
                foreach (int num in Square(Unique(nums)))
                    Console.WriteLine("Number returned from Unique: {0}", num);
                Console.Read();
            }
        }
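    A likely reading of the trace (offered as a sketch, not an authoritative answer): Square calls nums.Count(), and because the object produced by the Unique iterator is not a collection, Count() can only answer by enumerating it completely; the foreach then enumerates it a second time from the start. That is why the whole "Unique" trace appears once before "Nums in Square: 3" and again interleaved with "Square". One way to keep the pipeline single-pass:

        public static IEnumerable<int> Square(IEnumerable<int> nums)
        {
            // dropping the Count() call keeps the chain lazy and enumerated only once
            foreach (int num in nums)
            {
                Console.WriteLine("Square: {0}", num);
                yield return num * num;
            }
        }

        // ...or, if the count is genuinely needed, materialize the sequence once:
        // var list = nums.ToList();
        // Console.WriteLine("Nums in Square: {0}", list.Count);
        // foreach (int num in list) { ... }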

    Read the article

  • How to properly translate the "var" result of a lambda expression to a concrete type?

    - by CrimsonX
    So I'm trying to learn more about lambda expressions. I read this question on stackoverflow, concurred with the chosen answer, and have attempted to implement the algorithm using a console app in C# with a simple LINQ expression. My question is: how do I translate the "var result" of the lambda expression into a usable object from which I can then print the result? I would also appreciate an in-depth explanation of what is happening when I declare outer => outer.Value.Frequency (I've read numerous explanations of lambda expressions, but additional clarification would help). C#:

        // Input : {5, 13, 6, 5, 13, 7, 8, 6, 5}
        // Output : {5, 5, 5, 13, 13, 6, 6, 7, 8}
        // The question is to arrange the numbers in the array in decreasing order of their frequency,
        // preserving the order of their occurrence. If there is a tie, like in this example between
        // 13 and 6, then the number occurring first in the input array would come first in the output array.

        List<int> input = new List<int>();
        input.Add(5);
        input.Add(13);
        input.Add(6);
        input.Add(5);
        input.Add(13);
        input.Add(7);
        input.Add(8);
        input.Add(6);
        input.Add(5);

        Dictionary<int, FrequencyAndValue> dictionary = new Dictionary<int, FrequencyAndValue>();
        foreach (int number in input)
        {
            if (!dictionary.ContainsKey(number))
            {
                dictionary.Add(number, new FrequencyAndValue(1, number));
            }
            else
            {
                dictionary[number].Frequency++;
            }
        }

        var result = dictionary.OrderByDescending(outer => outer.Value.Frequency);
        // How to translate the result into something I can print??
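    A sketch of the two points, rather than a definitive answer: OrderByDescending over the dictionary returns an IOrderedEnumerable<KeyValuePair<int, FrequencyAndValue>>, so result is an ordinary sequence that can be iterated directly; and outer => outer.Value.Frequency is an anonymous function whose parameter outer is bound to each KeyValuePair in turn, with the pair's Value.Frequency used as the sort key.

        // 'result' is IOrderedEnumerable<KeyValuePair<int, FrequencyAndValue>>,
        // i.e. a sequence of key/value pairs that can simply be enumerated:
        foreach (var pair in result)
        {
            Console.WriteLine("value {0} occurs {1} time(s)", pair.Key, pair.Value.Frequency);
        }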

    Read the article

  • How to write rule for ICU genrb and pkgdata for boost-build?

    - by sandy
    jamroot.jam:

        rule genrb ( sources + : requirements * )  # create *.res files in binary directory
        {
            local result ;
            for local r in $(sources)
            {
                res $(r:B) : $(r) ;
            }
        }

    res.jam:

        type.register RES : res ;
        type.register TXT : txt ;
        generators.register-standard res.resource : TXT : RES ;

        actions resource
        {
            $(icu_home)\bin64\genrb "-d$(<:D)" "$()"
        }

    I need to run pkgdata with parameters:

        pkgdata [-options] [-] [packageFile]

    packageFile is a text file containing the list of res-files to package.

    Read the article

  • Consecutive build of VS2005 and VS2008 C++ projects causes LNK1104 error

    - by TestAccount
    I have VS2005 and VS2008 installed on the same machine. I also have a common codebase that I build using both '05 and '08. For this purpose, I have two VC projects: a '08 project called XYZ_2008.vcproj and a '05 project called XYZ_2005.vcproj, plus the corresponding two slns. Both projects output dlls, libs and pdbs to the same output directory (all with appropriate _2005 and _2008 suffixes). Assuming that I am starting from a clean state, I first open XYZ_2005.sln (containing XYZ_2005.vcproj) in VS2005 and build it successfully. Then I close VS2005. Next, I open XYZ_2008.sln (containing XYZ_2008.vcproj) and build (not rebuild) it. At this point, I get an error saying:

        LINK : fatal error LNK1104: cannot open file 'mfc80u.lib'

    If I now rebuild the '08 solution, the error goes away and the build succeeds. The build also succeeds if I directly do a rebuild instead of a build for the '08 sln. In spite of everything being separate, the VS08 build seems to be picking up an MFC8 file (from VS05) instead of an MFC9 file. Can somebody please help out with this issue? Thanks in advance!

    Read the article

  • How to disable server-side caching on IIS 7.5 (asp net mvc3)

    - by troebr
    I'm struggling with my IIS setup regarding caching; here's a brief description of my problem. I'm making a site for mobile and non-mobile devices, sharing the same controllers, i.e. mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5) there apparently is an aggressive server-side caching policy. If I access the website from a desktop machine, I get the correct pages (desktop version). If I then use my mobile phone to access the site, I still get the desktop version, which implies a server-side cache (my phone is not using the same network). Conversely, if I restart the server and access the site using my phone first, then I get the mobile version on my desktop (only for the pages I already visited, of course). I have tried 2 solutions so far. Disabling OutputCache in my Web.config:

        <httpModules>
          [..]
          <remove name="OutputCache" />
        </httpModules>

    And unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem on my other server (IIS 6.0), although caching is enabled there, which leads me to think it is related to the IIS 7 caching additions. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your iis lights!
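    A sketch of the IIS 7.x-level knobs involved (integrated pipeline assumed; the element names come from the standard system.webServer schema, though whether this alone cures the mobile/desktop mix-up depends on which layer is actually doing the caching):

        <system.webServer>
          <!-- turn off IIS output caching (user-mode and kernel-mode) for this site -->
          <caching enabled="false" enableKernelCache="false" />
          <!-- IIS 7 integrated-pipeline equivalent of the httpModules removal above -->
          <modules>
            <remove name="OutputCache" />
          </modules>
        </system.webServer>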

    Read the article

  • Passing data between Drupal module callback, preprocess and template

    - by rob5408
    I've created a module called finder that I want to take parameters from a URL, crunch them and then display results via a tpl file. Here are the relevant functions:

        function finder_menu() {
          $items = array();
          $items['finder'] = array(
            'page callback' => 'finder_view',
            'access callback' => TRUE,
          );
          return $items;
        }

        function finder_theme($existing, $type, $theme, $path) {
          return array(
            'finder_view' => array(
              'variables' => array('providers' => null),
              'template' => 'results',
            ),
          );
        }

        function finder_preprocess_finder_view(&$variables) {
          // put my data into $variables
        }

        function finder_view($zipcode = null) {
          // Get Providers from Zipcode
          return theme('finder_view', $providers);
        }

    Now I know finder_view is being called. I also know finder_preprocess_finder_view is being called. Finally, I know that results.tpl.php is being used for output. But I cannot wrap my head around how to do meaningful work in the callback and somehow make that data available in the preprocessor to add to "variables", so that I can access it in the tpl file. In a situation where you are using a tpl file, is the callback even useful for anything? I've done this in the past where the callback does all the work and passes to a theming function, but I want to use a file for output instead this time. Thanks...
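    A sketch of how the pieces usually connect (Drupal 7 conventions assumed, and finder_get_providers() is a hypothetical helper): whatever the page callback hands to theme() as named variables shows up in the preprocess function and, from there, in results.tpl.php.

        function finder_view($zipcode = null) {
          $providers = finder_get_providers($zipcode);   // hypothetical: crunch the URL parameter
          // keys here must match the 'variables' keys declared in hook_theme()
          return theme('finder_view', array('providers' => $providers));
        }

        function finder_preprocess_finder_view(&$variables) {
          // $variables['providers'] already holds the callback's data;
          // anything added here becomes another variable in the template
          $variables['provider_count'] = count($variables['providers']);
        }

        // results.tpl.php can then print $providers and $provider_count directly.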

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
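    Not a diagnosis, but the usual first debugging step under cron: capture stderr and use an absolute path, since cron's environment (PATH, limits) differs from an interactive shell and svnadmin's error and progress messages go to stderr, which otherwise only reaches cron's mail. A sketch of an amended entry (paths kept from the question; /usr/bin/svnadmin is an assumed location):

        # m h dom mon dow command
        18 00 * * * /usr/bin/svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2> /home/andrew/dump-errors.txt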

    Read the article

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS7 with GZIP. That is great... but, for one particular page, I need to stream it to the user, using Response.Flush() when needed. But when the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to flush the content out to the user. Is there a way that I can have this one page opt out of the compression?

    One possible option: I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it. E.g. Response.ContentType = "x-text/html". This works okay with IE8, as it falls back to displaying the HTML. But Firefox will ask the user what to do with the unknown file type. This could work if there were another MIME type I could use that browsers would accept as HTML and that is not matched in applicationhost.config. For reference, these are the MIME types that will be compressed:

        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />

    Other options? Are there other ways to opt out of compression?
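    A sketch of a per-URL opt-out that avoids the content-type trick ("streaming.aspx" is a placeholder path, and this relies on the standard <urlCompression> section being delegable to site-level config):

        <location path="streaming.aspx">
          <system.webServer>
            <!-- disable both dynamic and static compression for just this URL -->
            <urlCompression doDynamicCompression="false" doStaticCompression="false" />
          </system.webServer>
        </location>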

    Read the article

  • Replacing a word in a text file with a value using python

    - by Jamde Jam
    I have been trying to replace a word in a text file with a value (say 1), but my outfile is blank. I am new to python (it's only been a month since I have been learning it). My file is relatively large, but I just want to replace a word with the value 1 for now. Here is a segment of what the file looks like:

        NAME SECOND_1
        ATOM 1 6 0 0 0 # ORB 1
        ATOM 2 2 0 12/24 0 # ORB 2
        ATOM 3 2 12/24 0 0 # ORB 2
        ATOM 4 2 0 0 4/24 # ORB 3
        ATOM 5 2 0 0 20/24 # ORB 3
        ATOM 6 2 0 0 8/24 # ORB 3
        ATOM 7 2 0 0 16/24 # ORB 3
        ATOM 8 6 0 0 12/24 # ORB 1
        ATOM 9 2 12/24 0 12/24 # ORB 2
        ATOM 10 2 0 12/24 12/24 # ORB 2
        #1 #2 #3

    I want to first replace the word ATOM with the value 1. Next I want to replace #ORB with a space. Here is what I am trying thus far:

        input = open('SECOND_orbitsJ22.txt','r')
        output = open('SECOND_orbitsJ22_out.txt','w')
        for line in input:
            word = line.split(',')
            if (word[0] == 'ATOM'):
                word[0] = '1'
                output.write(','.join(word))

    Can anyone offer any suggestions or help? Thanks so much.
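    A sketch of one way this could look (file names kept from the question): the data is whitespace-separated, so split(',') leaves the whole line in word[0] and the ATOM comparison never matches, which would explain the empty output file. Treating it as a plain string replacement avoids that:

        # assumption: every line starting with ATOM should have that word become 1,
        # and the "# ORB" marker should become a space
        with open('SECOND_orbitsJ22.txt') as src, open('SECOND_orbitsJ22_out.txt', 'w') as dst:
            for line in src:
                if line.startswith('ATOM'):
                    line = line.replace('ATOM', '1', 1)
                line = line.replace('# ORB', ' ')
                dst.write(line)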

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. for running tests against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for those supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
                +-- test.xml
                +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML Schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
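    One common pattern (a sketch, not necessarily the best fit here): mark the XML files as content copied next to the test assembly ("Copy to Output Directory: Copy if newer"), then resolve them relative to the test AppDomain rather than the current working directory, so the same code works under the NUnit GUI, Visual Studio and the TeamCity agent:

        // AppDomain.CurrentDomain.BaseDirectory points at the test assembly's folder
        // even when the runner shadow-copies or starts in a different working directory
        string xmlPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "test.xml");
        XDocument doc = XDocument.Load(xmlPath);   // System.Xml.Linq; assumes the file was copied to output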

    Read the article

  • optimizing any OS for maximum informix client/server performance

    - by Frank Developer
    Is there any Informix documentation for optimizing any operating system where an ifx engine is running? For example, in Linux: strip down all unnecessary binaries, daemons and utilities to a bare minimum, tune kernel parameters, optimize raw and cooked devices (hdparm), place swap space on the beginning tracks of a disk, etc. Someday, maybe, Informix can create its own proprietary and dedicated PICK-like O/S to provide the most optimized environment for a standalone ifx server? The general idea is for the OS that ifx sits on to have the smallest footprint and the lowest overhead impact.

    Read the article

  • C system calls open / read / write / close problem.

    - by Andrei Ciobanu
    Hello, given the following code (it's supposed to write "helloworld" in a "helloworld" file, and then read the text back):

        #include <fcntl.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        #define FNAME "helloworld"

        int main(){
            int filedes, nbytes;
            char buf[128];

            /* Creates a file */
            if((filedes=open(FNAME, O_CREAT | O_EXCL | O_WRONLY | O_APPEND,
                             S_IRUSR | S_IWUSR)) == -1){
                write(2, "Error1\n", 7);
            }

            /* Writes hello world to file */
            if(write(filedes, FNAME, 10) != 10)
                write(2, "Error2\n", 7);

            /* Close file */
            close(filedes);

            if((filedes = open(FNAME, O_RDONLY))==-1)
                write(2, "Error3\n", 7);

            /* Prints file contents on screen */
            if((nbytes=read(filedes, buf, 128)) == -1)
                write(2, "Error4\n", 7);
            if(write(1, buf, nbytes) != nbytes)
                write(2, "Error5\n", 7);

            /* Close file after read */
            close(filedes);

            return (0);
        }

    The first time I run the program, the output is:

        helloworld

    After that, every time I run the program, the output is:

        Error1
        Error2
        helloworld

    I don't understand why the text isn't appended, as I've specified the O_APPEND flag. Is it because I've included O_CREAT? If the file is already created, shouldn't O_CREAT be ignored?
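    A sketch of one reading of the failure: O_CREAT alone is indeed ignored when the file already exists, but O_CREAT | O_EXCL makes open() fail with EEXIST on every run after the first, so filedes is -1 (Error1), the write fails (Error2), and nothing is ever appended; the "helloworld" printed afterwards is just the old contents read back. Dropping O_EXCL when the intent is to append is the usual fix:

        /* open for appending, creating the file only if it does not exist yet */
        filedes = open(FNAME, O_CREAT | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR);
        if (filedes == -1) {
            write(2, "open failed\n", 12);
            return 1;
        }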

    Read the article

  • Ruby Mechanize - Basic Get Failing

    - by hutch
    Code:

        a = WWW::Mechanize.new { |agent|
          agent.user_agent_alias = 'Mac Safari'
          agent.history.max_size = 0
        }

        page = a.get('http://livingsocial.com/deals?preferred_city=18')

    Trying a very basic GET request using Mechanize, but I get a 500, yet when I use curl I have no problems. Is there a problem with including parameters in a get() call? I know I am missing something simple.
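    A sketch of something worth trying (Mechanize versions I'm aware of accept the query as a separate parameters argument, and the exception object usually carries the server's actual error page; whether either changes the 500 depends on what the server objects to):

        begin
          page = a.get('http://livingsocial.com/deals', { 'preferred_city' => '18' })
        rescue WWW::Mechanize::ResponseCodeError => e
          puts e.page.body   # see what the server actually said alongside the 500
        end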

    Read the article

  • window.location subject to querystring limitation

    - by rod
    Edit: Thanks all for the help, rod.

    Hi All,

        $('#button1').click(function(){
            window.location = "/Home/GetCustomers?" + $('#myForm').serialize();
        });

    Is using window.location subject to querystring size limitations? For instance, what if my form has many parameters to serialize? Thanks, rodchar
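    For what it's worth: the practical limit is on URL length rather than on window.location itself (IE historically capped URLs at roughly 2083 characters, and other browsers and servers have their own limits), so a very large serialized form is safer sent as a POST. A sketch:

        // submit the same form data as a POST instead of building a long URL
        $('#button1').click(function () {
            $('#myForm')
                .attr('method', 'post')
                .attr('action', '/Home/GetCustomers')
                .submit();
        });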

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-) I've written ETLs in other languages that consist of individual operations that look something like:

        // in class CountOperation
        IEnumerable<Row> Execute(IEnumerable<Row> rows)
        {
            var count = 0;
            foreach (var row in rows)
            {
                row["record number"] = count++;
                yield return row;
            }
        }

    Then you string a number of these operations together, and call The Dispatcher, which is responsible for calling Operations and pushing data between them. I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs:

        (defun count-records (rows)
          (loop for count from 0
                for row in rows
                do (signal 'have-value :value `(:count ,count @,row))))

    The trouble is if I want to string together several operations, and run them. My first attempt at writing a dispatcher for these looks something like:

        (let ((next-op ...))                       ;; pick an op from the set of all ops
          (loop
            (handler-bind ((have-value (...)))     ;; records output from operation
              (setq next-op ...)                   ;; pick a new next-op
              (call next-op))))

    But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store, to store the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use. Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?

    Read the article

  • Using data from a dataset in styles with Mapnik

    - by Alex King
    I've set up Mapnik to connect to a PostGIS database and display geometry. I'd like to have a column in my database called opacity, and use it as the opacity for that row of geometry when Mapnik renders it. So far, I've only found information on how to display text from the database, and how to use filters to display different styles when database values are within parameters. Nothing about how to use the values directly inside of a style or layer, though - is this possible?

    Read the article

  • CGI Buffering issue

    - by Punit
    I have server-side C-based CGI code like this:

        cgiFormFileSize("UPDATEFILE", &size);    /* UPDATEFILE = file being uploaded */
        cgiFormFileName("UPDATEFILE", file_name, 1024);
        cgiFormFileContentType("UPDATEFILE", mime_type, 1024);
        buffer = malloc(sizeof(char) * size);

        if (cgiFormFileOpen("UPDATEFILE", &file) != cgiFormSuccess) {
            exit(1);
        }

        output = fopen("/tmp/cgi.tar.gz", "w+");
        printf("The size of file is: %d bytes", size);
        inc = size/(1024*100);

        while (cgiFormFileRead(file, b, sizeof(b), &got_count) == cgiFormSuccess) {
            fwrite(b, sizeof(char), got_count, output);
            i++;
            if (i == inc && j <= 100) {
                inc_pb = j;
                i = 0;
                j++;        /* j is the progress bar increment value */
            }
        }
        cgiFormFileClose(file);

        retval = system("mkdir /tmp/update-tmp;\
            cd /tmp/update-tmp;\
            tar -xzf ../cgi.tar.gz;\
            bash -c /tmp/update-tmp/update.sh");

    However, this doesn't work the way it looks like it should. Instead of printing 1, 2, ... 100 to progress_bar.txt one by one, it prints at ONE GO; it seems it buffers and then writes to the file. fflush() also didn't work. Any clue/suggestion would be really appreciated.
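    A sketch of the usual stdio suspects (the progress file itself isn't shown above, so the FILE* named progress here is hypothetical): streams backed by regular files are fully buffered by default, so each update has to be flushed explicitly, or the stream switched to unbuffered, before the values appear one by one. Note also that if the progress is written to stdout instead, the web server may buffer the CGI response on top of this.

        FILE *progress = fopen("/tmp/progress_bar.txt", "w");
        setvbuf(progress, NULL, _IONBF, 0);      /* option 1: no buffering at all */

        fprintf(progress, "%d\n", j);
        fflush(progress);                        /* option 2: flush after every single update */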

    Read the article

  • Projective transformation

    - by mcwehner
    Given two image buffers (assume each is an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation". I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me". An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
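    A sketch of the general math, independent of any library: a projective transformation (homography) is a 3x3 matrix H fully determined by the four corner correspondences; each pixel position is pushed through H with a homogeneous divide. Python/numpy is used here purely as executable pseudocode, with made-up coordinates:

        import numpy as np

        def homography(src, dst):
            """3x3 matrix H mapping the 4 src (x, y) points onto the 4 dst points."""
            A, b = [], []
            for (x, y), (u, v) in zip(src, dst):
                # with h33 fixed to 1, each point pair contributes two linear equations
                A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
                A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
            h = np.linalg.solve(np.array(A, float), np.array(b, float))
            return np.append(h, 1.0).reshape(3, 3)

        def apply_h(H, x, y):
            u, v, w = H @ np.array([x, y, 1.0])
            return u / w, v / w                  # homogeneous divide

        # map the unit square onto the four clicked corners of the quadrilateral
        H = homography([(0, 0), (1, 0), (1, 1), (0, 1)],
                       [(10, 10), (200, 40), (220, 240), (30, 200)])
        print(apply_h(H, 0.5, 0.5))

        # To fill the square output buffer: for each output pixel, normalize its (x, y)
        # to [0, 1], apply the same H to find the source pixel, and copy its color.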

    Read the article
