Search Results

Search found 11812 results on 473 pages for 'word processing'.

Page 408/473 | < Previous Page | 404 405 406 407 408 409 410 411 412 413 414 415  | Next Page >

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens)
        {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>  return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>  ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.
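    One workaround to sketch: boost::bind has to deal with FromString's defaulted third argument, which appears to be what trips the Intel front end, so a plain functor sidesteps bind entirely. Note also that std::transform writes through origins.begin() while origins is still empty, which is undefined behavior; std::back_inserter avoids that. (StringToInt is a hypothetical helper name, and this is a sketch rather than a confirmed fix for the ICC assertion.)

        #include <algorithm>
        #include <iterator>
        #include <string>
        #include <vector>

        // Wraps the questioner's FromString<int> so no default argument
        // has to pass through boost::bind.
        struct StringToInt
        {
            int operator()(const std::string& s) const
            {
                int value = 0;
                FromString(value, s); // uses the default std::dec
                return value;
            }
        };

        std::transform(originTokens.begin(), originTokens.end(),
                       std::back_inserter(origins), // grows the empty vector safely
                       StringToInt());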

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic, usually with WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        listen 80 default;
        server_name _;
        ...
        location / {
            root /var/www/$host;
            if (-f $request_filename) {
                expires max;
                break;
            }
            # problem: what if index.php does not exist?
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }
        ...

    If an index.php does not exist, and the requested file also does not exist, I would like it to return a 404. Currently nginx does not support multiple-condition or nested ifs, so I need a workaround.
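    A sketch of one common workaround, assuming an nginx version with try_files (0.7.27 or later): try_files checks each candidate in order and ends in a controlled fallback, so the rewrite can never cycle.

        # root moved to the server level so the named location inherits it
        root /var/www/$host;

        location / {
            # serve the file or directory if present, else hand off
            try_files $uri $uri/ @frontcontroller;
        }

        location @frontcontroller {
            # rewrite only when the front controller actually exists,
            # otherwise end with a plain 404 instead of a rewrite loop
            if (-f $document_root/index.php) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
            return 404;
        }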

    Read the article

  • how do I zoom to normal text size in vs2008?

    - by gerryLowry
    I cannot find any help on this issue in the vs2008 help, Google, or SO. Scenario: I'm looking at a source file in vs2008 SP1; Windows 2003 Server SP2 Standard Edition, 1280x1024. The irrelevant name of this file is index.aspx. What is relevant is that the file has only 65 lines of code. The print is unreadably small (less than 4 point). It uses less than a third of the vs2008 text window vertically and less than a quarter of it horizontally. It's not just index.aspx; e.g., another file with 142 lines only fills about 3/4 of the vs2008 text window vertically and less than a quarter horizontally. Possible cause: probably, though not certainly, I hit the equivalent of the zoom in/zoom out one finds in products like Microsoft Word. However, I've explored many vs2008 toolbars and other customization options, and unfortunately I cannot find a way to get myself out of this mess. Window, Reset Window Layout has no effect on the text size; my tiny text size did not change. QUESTION: how do I zoom vs2008 text size in and out and back to normal size? Thank you. Regards ~~ Gerry (Lowry)

    Read the article

  • Ruby: UTF-8 IO

    - by subtenante
    I use ruby 1.8.7. I am trying to parse some text files containing Greek sentences, encoded in UTF-8. (I can't really paste sample files here, because they are subject to copyright. It's just some Greek text encoded in UTF-8.) For each file, I want to parse it, extract all the words, and make a list of each new word found in that file, all saved to one big index file. Here is my code:

        #!/usr/bin/ruby -KU

        def prepare_line(l)
          l.gsub(/^\s*[ST]\d+\s*:\s*|\s+$|\(\d+\)\s*/u, "")
        end

        def tokenize(l)
          l.split /['·.;!:\s]+/u
        end

        $dict = {}
        $cpt = 0
        $out = File.new 'out.txt', 'w'

        def lesson(file)
          $cpt = $cpt + 1
          file.readlines.each do |l|
            $out.puts l
            l = prepare_line l
            tokenize(l).each do |t|
              unless $dict[t]
                $dict[t] = $cpt
                $out.puts " #{t}\n"
              end
            end
          end
        end

        Dir.new('etc/').each do |filename|
          f = File.new("etc/#{filename}")
          unless File.directory? f
            lesson f
          end
        end

    Here is part of my output:

        ?@???†?†?????????? ?...[snip very long hangul/hanzi mishmash]... ????????†? ???N2 : ?e?te?? (2) µ???µa

    (Note that the "puts l" part seems to work fine, at the end of the given output line.) Any idea what is wrong with my code? (General comments about Ruby idioms I could use are very welcome; I'm really a beginner.)
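    One thing worth checking, sketched below: on Ruby 1.8 the global $KCODE (together with the jcode library) controls multibyte string handling; setting it explicitly in the script is equivalent to -KU but harder to lose. It is also possible the bytes in out.txt are fine and only the terminal or viewer is decoding them with the wrong charset, since Ruby 1.8 passes bytes through untouched.

        # Force UTF-8 string/regexp semantics on Ruby 1.8, regardless of
        # how the interpreter was invoked.
        $KCODE = 'u'
        require 'jcode'  # multibyte-aware String helpers for 1.8

        def tokenize(l)
          l.split(/['·.;!:\s]+/u)
        end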

    Read the article

  • can javascript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way. Assume IE is still the dominant web browser (Firefox has documentation for binary processing): XMLHttpRequest.responseText and XMLHttpRequest.responseXML expect text or xml/xhtml/html, but what if the server answers the XMLHttpRequest with MIME type application/octet-stream? Would every character of the response string be less than 256? Thanks very much for a straight answer; I have no web server environment, so I don't know how to test it out. The reason I ask: using txt or xml raises character-set encoding issues, and I don't know how to process a CDATA node of an encoded xml (e.g. utf-8, ascii, gb18030) with javascript. When I get the node text, does the document object return me bytes or decoded characters? If it returns characters decoded according to the charSet indicated in the HTTP response header, it could be all wrong. To avoid messing with charsets, I would like the server to respond with octet data, with string data encoded as utf-8 or another charset inside that binary format. If the response is octets, I guess the browser would not try to decode the response "text". Does this sound weird, or am I misunderstanding something fundamental? EDIT: I believe the question is asking this: Can Javascript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so? EDIT: No, no, no. I mean: if the http-header content-type is "application/octet-stream", will IE try to decode it as 16-bit Unicode (or IE's local charset setting) when I read XMLHttpRequestobj.responseText from javascript? Or will it just wrap every single byte of the response body in a javascript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I talking Martian? Sadly, if I were a Martian I would come as a tourist without fuzzy questions. However, I am in a country which shares at least one property with Mars: RED.
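    For what it's worth, a sketch of the classic byte-preserving trick. This is the Firefox-documented route and assumes overrideMimeType is available, which IE of that era does not provide; IE usually needs responseBody plus a VBScript helper instead.

        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/data.bin", false);
        // Tell the parser to treat the payload as single-byte characters
        // so no charset decoding mangles the data.
        xhr.overrideMimeType("text/plain; charset=x-user-defined");
        xhr.send(null);

        var bytes = [];
        for (var i = 0; i < xhr.responseText.length; i++) {
            // Masking with 0xFF recovers the raw byte value from each
            // character code.
            bytes.push(xhr.responseText.charCodeAt(i) & 0xFF);
        }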

    Read the article

  • How can I create a "dynamic" WHERE clause?

    - by TheChange
    Hello there. First: thanks! I finished my other project and, big surprise, now everything works as it should :-) Thanks to some helpful thinkers on SO! So here I go with the next project. I'd like to build something like this:

        SELECT *
          FROM tablename
         WHERE field1 = content1
           AND field2 = content2
           ...

    As you noticed, this can be a very long WHERE clause. tablename is a static property which does not change. field1, field2, ... (!) and the contents can change. So I need a way to build up a SQL statement in PL/SQL within a recursive function. I don't really know what to search for, so I am asking here for links or even a word to search for. Please don't start to argue about whether the recursive function is really needed or what its disadvantages are; this is not in question ;-) If you could help me create something like a SQL string which can later run a successful SELECT, that would be very nice! I am able to go through the recursive function and make a longer string each time, but I cannot make a SQL statement from it. Oh, one additional thing: I get the fields and contents from an xmlType (xmldom.domdocument etc.); I can get the field and the content, for example, in a clob from the xmltype.
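    A minimal sketch of the usual pattern, assuming Oracle: collect the conditions into one string, then execute it as native dynamic SQL through a ref cursor. The "WHERE 1 = 1" seed lets every appended condition start with AND, and binding the values (rather than concatenating them) keeps the statement safe and shareable.

        DECLARE
          l_sql VARCHAR2(32767) := 'SELECT * FROM tablename WHERE 1 = 1';
          l_cur SYS_REFCURSOR;
        BEGIN
          -- each recursion step appends one condition; field names must be
          -- concatenated into the text, but values should stay as binds
          l_sql := l_sql || ' AND field1 = :1' || ' AND field2 = :2';

          OPEN l_cur FOR l_sql USING 'content1', 'content2';
          -- fetch from l_cur as usual, then CLOSE l_cur
        END;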

    Read the article

  • How to allow click-through and a cursor in a background app while not taking the active appearance away

    - by Peter Hosey
    Here are my goals:

    1. My application displays an overlay window above all applications' windows.
    2. The user can draw in the overlay window.
    3. The mouse cursor changes to a specific cursor while in the overlay window.
    4. The application that has the active appearance before summoning the overlay window still has it while the overlay window is up and usable.
    5. The user does not need to click on the overlay window to activate it before they can draw.
    6. Drawing in the window does not steal the active appearance away from the application that has it.

    With LSUIElement, I get #1, #2, #3, and #5. With LSBackgroundOnly, I get #1, #2, #4, and #6. How can I satisfy all of these goals without installing an event tap and processing the mouse events myself? Things I've tried:

    - [NSApp preventWindowOrdering] in mouseDown:
    - [NSApp activateIgnoringOtherApps:YES] in applicationWillFinishLaunching:
    - [myWindow orderFront:nil] in applicationWillFinishLaunching:
    - [myWindow makeKeyAndOrderFront:nil] in applicationWillFinishLaunching:
    - [myWindow orderFrontRegardless] in applicationWillFinishLaunching:
    - [myWindow makeMainWindow] in applicationWillFinishLaunching: (this caused failure of point 4 even with LSBackgroundOnly)
    - SetThemeCursor in applicationWillFinishLaunching: (with LSUIElement)
    - Implementing canBecomeMainWindow in my NSPanel subclass to return NO

    Except where otherwise noted, none of these made any difference. So, with LSUIElement, goals #4 and #6 remain; with LSBackgroundOnly, goals #3 and #5 remain. Any suggestions?
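    One avenue worth sketching, assuming the overlay is the NSPanel subclass already mentioned: a borderless panel created with NSNonactivatingPanelMask can take clicks and become key without activating its own application, which is the usual way to leave another app's active appearance intact.

        // Sketch: a non-activating overlay panel (in an LSUIElement app).
        // overlayFrame is a hypothetical NSRect for the overlay area.
        NSPanel *panel = [[NSPanel alloc]
            initWithContentRect:overlayFrame
                      styleMask:(NSBorderlessWindowMask | NSNonactivatingPanelMask)
                        backing:NSBackingStoreBuffered
                          defer:NO];
        [panel setFloatingPanel:YES];
        [panel setLevel:NSScreenSaverWindowLevel]; // above other apps' windows
        [panel orderFrontRegardless];              // show without activating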

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most optimal way. By identical I mean having the same number of order lines containing the same SKUs and the same order quantities. To achieve this I was thinking about hashing each order; we can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET-based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand this task over to the DB. My first question is whether the hashing should be managed by the DB, possibly using triggers, or whether the hash should be created on the fly using a view or something. And secondly, would it be best to calculate a hash for each order line and then combine these into an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which recalculates a single hash for the entire order and stores the value in the orders table? TIA
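    On the order-level question, a sketch of the application-side approach in C# (the method and names are hypothetical): the key point is canonicalizing the lines, sorting by SKU and rendering them as SKU:qty, so line order can never change the result.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static string OrderHash(IEnumerable<KeyValuePair<string, int>> lines)
        {
            // Canonical form: lines sorted by SKU, rendered as "SKU:QTY".
            string canonical = string.Join(";",
                lines.OrderBy(l => l.Key)
                     .Select(l => l.Key + ":" + l.Value)
                     .ToArray());

            using (SHA1 sha = SHA1.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }

    A PostgreSQL trigger could compute an equivalent value in the database (e.g. md5 over the sorted, concatenated lines), as long as both sides agree on the exact canonical form.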

    Read the article

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example, all relevant categories have already been added to the data store. When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue:forUndefinedKey: method in the Article class to set the relationships, like so:

        - (void)setValue:(id)value forUndefinedKey:(NSString *)key
        {
            if ([key isEqualToString:@"categories"]) {
                NSLog(@"trying to set categories...");
            }
        }

    The problem is that value isn't a Category; it is just a string (or an array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array. Should it be a class method on Article? Is there a way to pass in additional data to the -setValue:forUndefinedKey: method? Is there another, better option for setting the relationship that I'm not thinking of? Thanks for your help.
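    A sketch of the prefetch-and-index idea (the entity and attribute names here are assumptions): fetch every Category once, key them by title in a dictionary, and resolve each article's strings with plain lookups instead of per-article fetches.

        // One fetch for all categories, then O(1) lookups by title.
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Category"
                                       inManagedObjectContext:context]];
        NSArray *allCategories = [context executeFetchRequest:request error:NULL];

        NSMutableDictionary *categoriesByTitle = [NSMutableDictionary dictionary];
        for (NSManagedObject *category in allCategories) {
            [categoriesByTitle setObject:category
                                  forKey:[category valueForKey:@"title"]];
        }

        // Later, while applying an article's edits:
        NSManagedObject *match = [categoriesByTitle objectForKey:value];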

    Read the article

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server has been down for two days and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into a separate file, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could extract their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser, so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd = "mailq | grep -B 2 \"someemailaddress\@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);

        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject = `cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.
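    Rather than a better postcat, one option is to keep postcat for the raw dump and let a real MIME parser pull out the attachments. A sketch using the CPAN MIME-tools module (assuming it is installed; $line is the queue ID from the loop above):

        use MIME::Parser;

        my $parser = MIME::Parser->new;
        $parser->output_dir('attachments');   # extracted parts land here

        # Parse the queue file straight out of postcat.
        my $entity = $parser->parse_data(scalar `postcat -q $line`);
        print "Subject: ", $entity->head->get('Subject');

        # Walk the MIME parts; attachments such as PDFs are already
        # written to the output directory by the parser.
        for my $part ($entity->parts) {
            print "part: ", $part->mime_type, "\n";
        }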

    Read the article

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    Probably best to just watch this video: http://screencast.com/t/OWE1OWVkO As you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection:

        public IDLServer(System.Net.IPAddress addr, int port)
        {
            Listener = new TcpListener(addr, port);
            Listener.Server.NoDelay = true; // I added this just for testing, it has no impact
            Listener.Start();
            ConnectionThread = new Thread(ConnectionListener);
            ConnectionThread.Start();
        }

        private void ConnectionListener()
        {
            while (Running)
            {
                while (Listener.Pending() == false)
                {
                    System.Threading.Thread.Sleep(1);
                } // this is the part with the lag

                Console.WriteLine("Client available"); // from this point on everything runs perfectly fast
                TcpClient cl = Listener.AcceptTcpClient();
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }

    (I was having some trouble getting the code into a code block.) I've tried a couple of different things; could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.
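    One simplification worth trying, sketched here: drop the Pending()/Sleep(1) polling loop entirely. AcceptTcpClient() blocks until a client arrives, so the listener reacts as soon as the connection is queued rather than on the next poll:

        private void ConnectionListener()
        {
            while (Running)
            {
                // Blocks until a connection is pending; no polling interval.
                TcpClient cl = Listener.AcceptTcpClient();
                Console.WriteLine("Client available");

                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }

    If a delay of around a second survives even this, it often points outside the accept path (for example a reverse DNS lookup or proxy detection on the client side) rather than at the listener itself.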

    Read the article

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi. I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL generated from that proxy. My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since it takes so much time for the server to compute, after 1 minute I receive this error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me on what to do. It is very urgent. Thanks
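    That message is the ContextSwitchDeadlock managed debugging assistant firing because the UI (STA) thread is blocked in a synchronous call for over a minute. A sketch of the usual cure, assuming the generated proxy includes the event-based async members (the *Async/*Completed pair only exists if it was generated that way):

        Service1 sc = new Service1();

        // Runs the call on a worker thread; the UI thread keeps pumping.
        sc.GetAttributesForPortfolioCompleted += (sender, e) =>
        {
            TimeSeries3D t = e.Result;   // use the result once it arrives
        };
        sc.GetAttributesForPortfolioAsync(portfolioId, true, str, dr);

    Alternatively the MDA can be disabled in the debugger, but that only hides the warning; the synchronous call would still freeze the client for the full duration.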

    Read the article

  • How do I use a custom cookie session serializer in Rack?

    - by Damien Wilson
    Hello SO. I'm currently integrating Warden into a new Rack application I'm building. I'd like to make use of a recent patch to Rack that allows me to specify how sessions are serialized; specifically, I'd like to use Rack::Session::Cookie::Identity as the session processor. Unfortunately, the documentation is a little unclear as to what syntax I should use to configure Rack::Session::Cookie in my rackup file. Can anyone here tell me what I'm doing wrong?

    config.ru:

        require 'my_sinatra_app'

        app = self
        use Rack::Session::Cookie.new(app, Rack::Session::Cookie::Identity.new), {:key => "auth_token"}
        use Warden::Manager do |warden|
          # Must come AFTER Rack::Session
          warden.default_strategies :password
          warden.failure_app Jelli::Auth.run!
        end

        run MySinatraApp

    Error message from thin:

        !! Unexpected error while processing request: undefined method `new' for #<Rack::Session::Cookie:0x00000110124128>

    PS: I'm using bundler to manage my gem dependencies, and I've included rack's master branch as the desired version. Update: As suggested in the comments below, I have read the documentation; sadly the suggested syntax in the docs is not working. Update: Still no luck on my end; offering up a bounty to whoever can help me figure this out.
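    The error itself is suggestive: `use` expects a middleware class plus its constructor arguments (Rack instantiates it, passing in the inner app), so handing it an already-built instance means Rack later calls .new on that instance. A sketch of the expected shape, assuming the patched Rack exposes the serializer through a :coder option:

        # config.ru sketch: let `use` do the instantiation
        use Rack::Session::Cookie,
            :key   => "auth_token",
            :coder => Rack::Session::Cookie::Identity.new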

    Read the article

  • XSLT: Add namespace to root element

    - by Ingrid
    I need to change namespaces in the root element as follows. Input document:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <foo xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:ns2="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    Desired output:

        <foo audience="external"
             xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:xlink="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9">

    I was trying to do it as I copy over the whole document, before I give any other transformation instructions, but the following doesn't work:

        <xsl:template match="* | processing-instruction() | comment()">
          <xsl:copy copy-namespaces="no">
            <xsl:for-each select=".">
              <xsl:attribute name="audience" select="'external'"/>
              <xsl:namespace name="xlink" select="'http://www.w3.org/1999/xlink'"/>
            </xsl:for-each>
            <xsl:apply-templates/>
            <xsl:copy-of select="@*"/>
            <xsl:apply-templates/>
          </xsl:copy>
        </xsl:template>

    Thanks for any advice!
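    A sketch of a corrected template (XSLT 2.0, using the same constructs as above): namespace nodes must be emitted before attribute nodes, attributes before any child content, and the duplicated apply-templates dropped; the audience attribute is restricted to the root element, matching the desired output.

        <xsl:template match="* | processing-instruction() | comment()">
          <xsl:copy copy-namespaces="no">
            <!-- order matters: namespaces first, then attributes, then children -->
            <xsl:namespace name="xlink"
                           select="'http://www.w3.org/1999/xlink'"/>
            <xsl:copy-of select="@*"/>
            <xsl:if test="not(parent::*)">
              <xsl:attribute name="audience" select="'external'"/>
            </xsl:if>
            <xsl:apply-templates/>
          </xsl:copy>
        </xsl:template>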

    Read the article

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in ASP.NET MVC). I am looking at my options here, so here is a list of requirements:

    1. It has to be secure (duh).
    2. It needs to support extra user properties over and above the usual name and address stuff (such as money or credits for a user).
    3. It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose).
    4. It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation).
    5. It has to deal with emailing the user when he registers, in order for him to activate his account.
    6. It has to deal with activating the user when he clicks the "activate me" link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application).

    I was thinking of creating a library, working together with forms authentication, that exposes whatever methods are required (e.g. login, logout, activate, etc.) and a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? This looks like a very common problem, so aren't there existing projects that I could use? Thanks for reading.

    Read the article

  • Is there a way to parse XML via SAX/DOM with line numbers available per node?

    - by Chris
    I have already written a DOM parser for a large XML document format that contains a number of items that can be used to automatically generate Java code. This is limited to small expressions that are then merged into a dynamically generated Java source file. So far, so good; everything works. BUT: I wish to be able to embed the line number of the XML node each piece of Java code was included from, so that if the configuration contains uncompilable code, each method will have a pointer to the source XML document and line number for ease of debugging. I don't require the line number at parse time, and I don't need to validate the XML source document and throw an error at a particular line number. I need to be able to access the line number for each node and attribute in my DOM, or per SAX event. Any suggestions on how I might achieve this? P.S. I read that StAX has a method to obtain the line number while parsing, but ideally I would like to achieve the same result with regular SAX/DOM processing in Java 4/5 rather than become a Java 6+ application or take on extra .jar files.
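    With plain SAX this is directly supported: the parser hands the handler a Locator before any events fire. A minimal sketch using the standard org.xml.sax API, which is available well before Java 6:

        import org.xml.sax.Attributes;
        import org.xml.sax.Locator;
        import org.xml.sax.helpers.DefaultHandler;

        public class LineNumberHandler extends DefaultHandler {
            private Locator locator;

            // The parser calls this once, before startDocument().
            public void setDocumentLocator(Locator locator) {
                this.locator = locator;
            }

            public void startElement(String uri, String localName,
                                     String qName, Attributes atts) {
                int line = locator.getLineNumber();
                // stash `line` alongside the node being built, e.g. as
                // user data on the corresponding DOM element
            }
        }

    For DOM, the usual trick is the same idea wrapped in a SAX-based builder that records each element's line number as it constructs the tree (for example via setUserData from DOM Level 3 on Java 5).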

    Read the article

  • ASP.NET Login Control rejects users who exist

    - by rsteckly
    Hi, I'm having some trouble with the ASP.NET 2.0 Login control. I've set up a database with the aspnet_regsql tool. I've checked the application name; it is set to "/". The application can access the SQL Server. In fact, when I go to retrieve the password, it will even send me the password. Despite this, the Login control continues to reject logins. I added this to the web.config:

        <membership defaultProvider="AspNetSqlProvider">
          <providers>
            <clear/>
            <add name="AspNetSqlProvider"
                 connectionStringName="LocalSqlServer"
                 applicationName="/"
                 type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
          </providers>
        </membership>

    And I added the following to my connection strings (the "remove name" is to get rid of the default connection string pointing at the App_Data directory):

        <remove name="LocalSqlServer" />
        <add name="LocalSqlServer"
             connectionString="Data Source=IDC-4\EXCALIBUR;Initial Catalog=allied_nr;Integrated Security=True;Asynchronous Processing=True"/>

    Why won't the Login control authenticate users?

    Read the article

  • .NET Regex - Replace multiple characters at once without overwriting?

    - by Everaldo Aguiar
    I'm implementing a C# program that should automate a mono-alphabetic substitution cipher. The functionality I'm working on at the moment is the simplest one: the user provides a plain text and a cipher alphabet, for example:

        Plain text (input):   THIS IS A TEST
        Cipher alphabet:      A -> Y, H -> Z, I -> K, S -> L, E -> J, T -> Q
        Cipher text (output): QZKL KL Y QJLQ

    I thought of using regular expressions since I've been programming in Perl for a while, but I'm encountering some problems in C#. First, I would like to know if someone has a suggestion for a regular expression that would replace all occurrences of each letter with its corresponding cipher letter (provided by the user) at once, without overwriting anything. Example: in this case the user provides the plaintext "TEST", and in his cipher alphabet he wishes to have all T's replaced with E's, E's replaced with Y's, and S's replaced with J's. My first thought was to substitute each occurrence of a letter with an individual placeholder character and then replace each placeholder with the cipher letter corresponding to the plaintext letter. Using the same example word "TEST", the steps taken by the program would be:

        1. replace T's with (let's say) @
        2. replace E's with #
        3. replace S's with &
        4. replace @ with E, # with Y, & with J
        5. output = EYJE

    This solution doesn't seem workable for large texts. I would like to know if anyone can think of a single regular expression that would allow me to replace each letter in a given text with its corresponding letter in a 26-letter cipher alphabet, without the need to split the task into an intermediate step as I described. (The original post included a screenshot of the program's GUI to help visualize the process.)
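    A sketch of the single-pass approach in C#: Regex.Replace with a MatchEvaluator consults the substitution map once per character, so earlier replacements can never be re-substituted and the placeholder dance becomes unnecessary.

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        var map = new Dictionary<char, char>
        {
            { 'A', 'Y' }, { 'H', 'Z' }, { 'I', 'K' },
            { 'S', 'L' }, { 'E', 'J' }, { 'T', 'Q' }
        };

        string cipher = Regex.Replace("THIS IS A TEST", "[A-Z]", m =>
        {
            char c;
            // Letters missing from the map pass through unchanged.
            return map.TryGetValue(m.Value[0], out c) ? c.ToString() : m.Value;
        });
        // cipher == "QZKL KL Y QJLQ"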

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers. I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A, and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme serves my own research needs and has helped me considerably. However, I find this RSS mashup scheme kind of clumsy: it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data, as well as eventually rank the collected list of URLs into some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition tells me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science; my background is in physical science/statistics, so is my thinking on the right track? I guess I am imagining an application that allows me to query in a refined manner: one that lets me search for keyword combinations, apply AND/OR operators, selectively focus my queries on particular sources (like a collection of blogs, or Twitter, or social networking communities), and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

    Read the article

  • Reporting Services - can't group by a column called "LanguageId"

    - by marc_s
    Folks, I have really odd behavior here: I have a SQL Server 2008 Reporting Services report which gets grouped and sorted dynamically. One of the columns in my data set which I display is called LanguageId, and I was trying to get a grouping going on this LanguageId field. I checked, double-checked and triple-checked the data being returned; it does contain my expected values for LanguageId, and everything seems fine and dandy. It just never worked: I didn't get the expected groups, I got things like a specific node actually changing its display value from one ID to another when expanding its subitems, and other really wacky stuff. I discovered that grouping and sorting by LanguageCaption works just fine. It also started working fine after I renamed LanguageId to MyLanguageId. So where on earth is it documented that LanguageId appears to be a system variable / reserved word / keyword of some sort in SQL Server Reporting Services that must be avoided at all costs? I can't seem to find anything on that topic; even Mr. Google and Mrs. Bing came up empty so far...

    Read the article

  • Does cout need to be terminated with a semicolon?

    - by Philippe Harewood
    I am reading Bjarne Stroustrup's Programming: Principles and Practice Using C++. The drill section for Chapter 2 talks about various ways to look at typing errors when compiling the hello_world program:

        #include "std_lib_facilities.h"

        int main() // C++ programs start by executing the function main
        {
            cout << "Hello, World!\n", // output "Hello, World!"
            keep_window_open();        // wait for a character to be entered
            return 0;
        }

    In particular, this section asks: "Think of at least five more errors you might have made typing in your program (e.g. forget keep_window_open(), leave the Caps Lock key on while typing a word, or type a comma instead of a semicolon) and try each to see what happens when you try to compile and run those versions." For the cout line, you can see that there is a comma instead of a semicolon. This compiles and runs (for me). Is it making an assumption (like in the JavaScript question "Why use semicolon?") that the statement has been terminated? Because when I try the same thing on the keep_window_open(); line, the compiler informs me of the missing semicolon.
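    It is not an assumption about termination: the comma here is C++'s comma operator, which sequences two expressions into one, so the cout line plus the keep_window_open() call form a single valid statement. A sketch illustrating the same effect without the book's helper header:

        #include <iostream>
        using std::cout;

        int main()
        {
            // One statement, not two: the comma operator evaluates the
            // left expression, discards its value, then evaluates the right.
            cout << "Hello, World!\n",
            cout << "still the same statement\n";
            return 0;
        }

    The keep_window_open() variant fails to compile because a comma there would require another expression to follow, and the return keyword cannot continue an expression.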

    Read the article

  • Accept templated parameter of stl_container_type<string>::iterator

    - by Rodion Ingles
    I have a function that takes a container holding strings (e.g. vector<string>, set<string>, list<string>) and, given a start iterator and an end iterator, goes through the iterator range processing the strings. Currently the function is declared like this:

        template <typename ContainerIter>
        void ProcessStrings(ContainerIter begin, ContainerIter end);

    Now this will accept any type which conforms to the implicit interface of implementing operator*, prefix operator++, and whatever other calls are in the function body. What I really want is a definition like the one below, which explicitly restricts the accepted input (pseudocode warning):

        template <typename Container<string>::iterator>
        void ProcessStrings(Container<string>::iterator begin,
                            Container<string>::iterator end);

    so that I can use it as such:

        vector<string> str_vec;
        list<string> str_list;
        set<SomeOtherClass> so_set;

        ProcessStrings(str_vec.begin(), str_vec.end());   // OK
        ProcessStrings(str_list.begin(), str_list.end()); // OK
        ProcessStrings(so_set.begin(), so_set.end());     // Error

    Essentially, what I am trying to do is restrict the function signature so that it is obvious to a user what the function accepts, and so that if the code fails to compile they get a message saying they are using the wrong parameter types, rather than an error from somewhere in the function body about some function not being found for some class.
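    A sketch of one way to get that compile-time restriction in C++03: look up the iterator's value_type through std::iterator_traits and reject anything that is not std::string (shown here with Boost's static assert, since this predates the static_assert keyword):

        #include <iterator>
        #include <string>
        #include <boost/static_assert.hpp>
        #include <boost/type_traits/is_same.hpp>

        template <typename ContainerIter>
        void ProcessStrings(ContainerIter begin, ContainerIter end)
        {
            typedef typename std::iterator_traits<ContainerIter>::value_type
                value_type;

            // Fails at the point of instantiation, with a readable message,
            // when the range does not hold std::string.
            BOOST_STATIC_ASSERT((boost::is_same<value_type,
                                                std::string>::value));

            for (; begin != end; ++begin) {
                // ... process *begin ...
            }
        }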

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there. As stated in the title, I want to have my old C++ library working in managed .NET. I can think of two possibilities:

    1. I might try to compile the library with /clr and try the "It Just Works" approach.
    2. I might write a managed wrapper for the unmanaged library.

    First of all, I want to have my library working FAST, as it was in the unmanaged environment, so I am not sure whether the first approach would cause a large decrease in performance. However, it seems to be faster to implement (not the right word :-)), assuming it will work for me. On the other hand, I can think of some problems that might appear while writing a wrapper (e.g. how to wrap an STL collection, vector for instance?). I am thinking of having the wrapper reside in the same project as the unmanaged C++ (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the other). Is that a reasonable approach? What would you suggest for this problem, and which solution is going to give me better performance of the resulting code? Thank you in advance for any suggestions and clues! Cheers
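    For option 2, a minimal sketch of the usual C++/CLI wrapper shape, reusing the class names from the question: a ref class owns a pointer to the native object and implements both the destructor (which becomes IDisposable::Dispose) and the finalizer, so the native memory is always released.

        // C++/CLI (compiled with /clr)
        public ref class MyManagedClass
        {
        public:
            MyManagedClass() : impl(new MyUnmanagedClass()) {}

            // Destructor: deterministic cleanup via Dispose.
            ~MyManagedClass() { this->!MyManagedClass(); }

            // Finalizer: safety net if Dispose is never called.
            !MyManagedClass()
            {
                delete impl;
                impl = nullptr;
            }

        private:
            MyUnmanagedClass* impl; // the native workhorse
        };

    An STL member such as a vector<string> is typically exposed by converting at the boundary (e.g. to array<String^>^) rather than wrapped element by element, which also keeps the hot path in native code.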

    Read the article

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters of the beta-binomial distribution, and I need to assign the parameter values to R objects in the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with Sage:

        <<echo=false,results=hide>>=
        mean.raw <- c(5, 3.5, 2)
        theta <- 0.5
        var.raw <- mean.raw + ((mean.raw^2)/theta)
        @

        \begin{frame}[fragile]
        \frametitle{Test of Sage 2}
        \begin{sagesilent}
        var('a1, b1, a2, b2, a3, b3')
        eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}]
        eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}]
        eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}]
        s1 = solve(eqn1, a1,b1)
        s2 = solve(eqn2, a2,b2)
        s3 = solve(eqn3, a3,b3)
        \end{sagesilent}
        Solutions of Beta Binomial Parameters:
        \begin{itemize}
        \item $\sage{s1[0]}$
        \item $\sage{s2[0]}$
        \item $\sage{s3[0]}$
        \end{itemize}
        \end{frame}

    Everything compiles just fine, and on that slide I can see the solutions for the three equations' respective parameters in the itemized list (for example, the first item on that Beamer slide is output as [a1=(328/667), b1=(65272/667)]; I am not able to post an image of the slide, but I hope you get the idea). I would like to save the parameter values a1, b1, a2, b2, a3, b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from Sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?
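    One workaround to sketch, since sagetex has no channel back into R: the sagesilent block runs ordinary Sage/Python code, so it can write the solved values to a file that an R chunk reads back. (The file name is made up, and the .rhs() extraction assumes solve() returns its usual list of equation lists.)

        # inside \begin{sagesilent}, after s1/s2/s3 are computed
        f = open('bb_params.csv', 'w')
        f.write("a,b\n")
        for s in (s1, s2, s3):
            # each solution looks like [a == ..., b == ...]
            f.write("%s,%s\n" % (s[0][0].rhs(), s[0][1].rhs()))
        f.close()

    A subsequent R chunk could then pick the values up with params <- read.csv('bb_params.csv'), with the caveat that Sweave's R pass runs before the Sage pass, so the values only become available on a second run through the Sweave/LaTeX/Sage cycle.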

    Read the article

  • Extract the vb code associated with a macro attached to an Action button in PowerPoint

    - by Patricker
    I have about 25 PowerPoint presentations, each with at least 45 slides. On each slide is a question with four possible answers and a help button which provides a hint relevant to the question. Each of the answers and the help button is a PowerPoint action button that launches a macro. I am attempting to migrate all the questions/answers/hints into a SQL database. I've worked with Office.Interop before for Excel and Word, and I have plenty of SQL DB experience, so I don't foresee any issues with actually extracting the text portion of the questions and answers and putting it into the db. What I have no idea how to do is, given an object on a slide: get the action button info, get the macro name, and finally get the macro's VB code. From there I can figure out how to parse out which is the correct answer and what the text of the hint is. Any help/ideas would be greatly appreciated. The guy I'm doing this for said he is willing to do it by hand... but that would take a while.
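    A sketch of the interop route in C#, using the PowerPoint PIA plus the VBIDE extensibility assembly: the shape's click action exposes the macro name, and the presentation's VBProject exposes each macro's source. Assumptions: `slide` and `pres` are already-open interop objects, "Trust access to the VBA project object model" is enabled in Office, and real code would guard ProcStartLine, which raises an error for modules that don't contain the macro.

        // using PowerPoint = Microsoft.Office.Interop.PowerPoint;
        // using VBIDE = Microsoft.Vbe.Interop;
        foreach (PowerPoint.Shape shape in slide.Shapes)
        {
            PowerPoint.ActionSetting click =
                shape.ActionSettings[PowerPoint.PpMouseActivation.ppMouseClick];

            if (click.Action == PowerPoint.PpActionType.ppActionRunMacro)
            {
                string macroName = click.Run; // the macro this button fires

                foreach (VBIDE.VBComponent comp in pres.VBProject.VBComponents)
                {
                    VBIDE.CodeModule code = comp.CodeModule;
                    int start = code.ProcStartLine(macroName,
                        VBIDE.vbext_ProcKind.vbext_pk_Proc);
                    int length = code.ProcCountLines(macroName,
                        VBIDE.vbext_ProcKind.vbext_pk_Proc);
                    string body = code.get_Lines(start, length);
                    // parse `body` for the hint text / correct answer
                }
            }
        }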

    Read the article
