Search Results

Search found 21759 results on 871 pages for 'int'.

  • Is there a faster way to parse a large file with regex?

    - by Ray Eatmon
    Problem: I have a very, very large file that I need to parse line by line to get 3 values from each line. Everything works, but it takes a long time to parse the whole file. Is it possible to do this within seconds? Typical time is between 1 and 2 minutes. Example file size is 148,208 KB. I am using regex to parse every line. Here is my C# code:

        private static void ReadTheLines(int max, Responder rp, string inputFile) {
            List<int> rate = new List<int>();
            double counter = 1;
            try {
                using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024)) {
                    string line;
                    Console.WriteLine("Reading....");
                    while ((line = sr.ReadLine()) != null) {
                        if (counter <= max) {
                            counter++;
                            rate = rp.GetRateLine(line);
                        } else if (max == 0) {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                    }
                    rp.GetRate(rate);
                    Console.ReadLine();
                }
            } catch (Exception e) {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }

    Here is my regex:

        public List<int> GetRateLine(string justALine) {
            const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
            Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);
            // Here we check the Match instance.
            if (match.Success) {
                // Finally, we get the Group value and store it.
                string theRate = match.Groups[3].Value;
                Ratestorage.Add(Convert.ToInt32(theRate));
            } else {
                Ratestorage.Add(0);
            }
            return Ratestorage;
        }

    Here is an example line to parse; there are usually around 200,000 lines:

        10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
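    A common first optimization, sketched here rather than taken from the post: build the Regex once with RegexOptions.Compiled instead of re-parsing the pattern on every call to Regex.Match. The body assumes the same Ratestorage field as in the question:

        // Sketch: compile the pattern once and reuse it for every line.
        private static readonly Regex RateRegex = new Regex(
            @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$",
            RegexOptions.Compiled | RegexOptions.IgnoreCase);

        public List<int> GetRateLine(string justALine) {
            Match match = RateRegex.Match(justALine);
            Ratestorage.Add(match.Success ? Convert.ToInt32(match.Groups[3].Value) : 0);
            return Ratestorage;
        }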

  • Web service performing an asynchronous call

    - by kornelijepetak
    I have a web service method FetchNumber() that fetches a number from a database and returns it to the caller. Just before returning the number, it needs to send it to another service, so it instantiates and runs a BackgroundWorker whose job is to send the number along.

        public class FetchingNumberService : System.Web.Services.WebService {
            [WebMethod]
            public int FetchNumber() {
                int value = Database.GetNumber();
                AsyncUpdateNumber async = new AsyncUpdateNumber(value);
                return value;
            }
        }

        public class AsyncUpdateNumber {
            public AsyncUpdateNumber(int number) {
                sendingNumber = number;
                worker = new BackgroundWorker();
                worker.DoWork += asynchronousCall;
                worker.RunWorkerAsync();
            }

            private void asynchronousCall(object sender, DoWorkEventArgs e) {
                // Sending the number to a service (which is synchronous) happens here
            }

            private int sendingNumber;
            private BackgroundWorker worker;
        }

    I don't want to block the web service (FetchNumber()) while sending this number to the other service, because that can take a long time and the caller does not care about it; the caller expects the method to return as soon as possible. FetchNumber() creates the background worker, runs it, and then finishes (while the worker is running on a background thread). I don't need any progress report or return value from the background worker; it's more of a fire-and-forget concept. My question is this: since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) finishes while the background worker it instantiated is still running? What happens to the background thread? When does the GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.
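    For reference, a minimal fire-and-forget sketch using the thread pool directly (an illustration only; SendNumberToOtherService is an invented name for the synchronous call described above):

        [WebMethod]
        public int FetchNumber() {
            int value = Database.GetNumber();
            // Queue the slow notification and return immediately; the pool thread
            // keeps running after this method has returned.
            ThreadPool.QueueUserWorkItem(state => SendNumberToOtherService(value));
            return value;
        }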

  • Boost Date_Time problem compiling a simple program

    - by Andry
    Hello! I'm writing a very simple program using the Boost Date_Time library:

        int main(int argc, char** argv) {
            using namespace boost::posix_time;
            date d(2002, Feb, 1);                    // an arbitrary date
            ptime t1(d, hours(5) + nanosec(100));    // date + time-of-day offset
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time();  // use the clock
            date today = now.date();                 // get the date part out of the time
        }

    I cannot compile it; the compiler does not recognize a type. I have used many features of Boost libraries, like serialization and more. I built them correctly, and looking in my /usr/local/lib folder I can see that libboost_date_time.so is there (a good sign, which means I was able to build that library). When I compile, I write the following:

        g++ -lboost_date_time main.cpp

    But the errors shown when I specify the lib are the same as those when I do not specify any lib. What is this? Anyone know? The error is:

        main.cpp: In function 'int main(int, char**)':
        main.cpp:9: error: 'date' was not declared in this scope
        main.cpp:9: error: expected ';' before 'd'
        main.cpp:10: error: 'd' was not declared in this scope
        main.cpp:10: error: 'nanosec' was not declared in this scope
        main.cpp:13: error: expected ';' before 'today'
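    Note these are compiler errors, not linker errors, so the -lboost_date_time flag is not what is failing here. A minimal sketch that does compile (assuming a standard Boost install): date lives in boost::gregorian, which also has to be pulled in, and the default time resolution exposes millisec rather than nanosec:

        #include <boost/date_time/posix_time/posix_time.hpp>

        int main() {
            using namespace boost::posix_time;
            using namespace boost::gregorian;        // 'date' is declared here

            date d(2002, Feb, 1);
            ptime t1(d, hours(5) + millisec(100));   // nanosec needs a nanosecond-resolution build
            ptime t2 = t1 - minutes(4) + seconds(2);
            ptime now = second_clock::local_time();
            date today = now.date();
            return 0;
        }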

  • How to embed Xbase expressions in an Xtext DSL

    - by Marcus Mathioudakis
    I am writing a simple little DSL for specifying constraints on messages, and I have been trying without success for a while to embed Xbase expressions into the language. The grammar looks like this:

        grammar org.xtext.businessrules.BusinessRules with org.eclipse.xtext.xbase.Xbase
        //import "http://www.eclipse.org/xtext/xbase/Xbase" as xbase
        import "http://www.eclipse.org/xtext/common/JavaVMTypes" as jvmTypes
        generate businessRules "http://www.xtext.org/businessrules/BusinessRules"

        Start:
            rules+=Constraint*;

        Constraint:
            {Constraint}
            'FOR' 'PAYLOAD' payload=PAYLOAD 'ELEMENT' element=ID 'CONSTRAINED BY' constraint=XExpression;

        PAYLOAD:
            "SimulationSessionEvents" | "stacons" | "any";

        Range:
            'above' min=INT ('below' max=INT)?
            | 'below' max=INT ('above' min=INT)?;

    When trying to parse a file such as:

        FOR PAYLOAD SimulationSessionEvents ELEMENT matrix CONSTRAINED BY ...

    I can't get it to work when ... is any kind of arithmetic expression, although it works when ... is a loop or if expression, or even just a number. As soon as I write something like '-5' or '4-5', it says Couldn't resolve reference to JvmIdentifiableElement '-', even though the Xbase.xtext grammar looks like it allows these expressions. I don't think I'm missing any jars, as it doesn't complain when I run the MWE workflow, but only when trying to parse the input file. Any help would be much appreciated.

  • Creating a [materialised] view from generic data in Oracle/MySQL

    - by Andrew White
    I have a generic data model with 3 tables:

        CREATE TABLE Properties (
            propertyId int(11) NOT NULL AUTO_INCREMENT,
            name varchar(80) NOT NULL
        );

        CREATE TABLE Customers (
            customerId int(11) NOT NULL AUTO_INCREMENT,
            customerName varchar(80) NOT NULL
        );

        CREATE TABLE PropertyValues (
            propertyId int(11) NOT NULL,
            customerId int(11) NOT NULL,
            value varchar(80) NOT NULL
        );

        INSERT INTO Properties VALUES (1, 'Age');
        INSERT INTO Properties VALUES (2, 'Weight');
        INSERT INTO Customers VALUES (1, 'Bob');
        INSERT INTO Customers VALUES (2, 'Tom');
        INSERT INTO PropertyValues VALUES (1, 1, '34');
        INSERT INTO PropertyValues VALUES (2, 1, '80KG');
        INSERT INTO PropertyValues VALUES (1, 2, '24');
        INSERT INTO PropertyValues VALUES (2, 2, '53KG');

    What I would like to do is create a view that has as columns all the rows in Properties and as rows the entries in Customers, with the column values populated from PropertyValues. For example:

        customerId  Age  Weight
        1           34   80KG
        2           24   53KG

    I'm thinking I need a stored procedure to do this, and perhaps a materialised view (the entries in the table Properties change rarely). Any tips?
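    A hedged sketch of the pivot itself as a plain view, with the property columns hard-coded; a stored procedure would have to regenerate this DDL whenever Properties changes:

        CREATE VIEW CustomerProperties AS
        SELECT c.customerId,
               MAX(CASE WHEN p.name = 'Age'    THEN pv.value END) AS Age,
               MAX(CASE WHEN p.name = 'Weight' THEN pv.value END) AS Weight
        FROM Customers c
        JOIN PropertyValues pv ON pv.customerId = c.customerId
        JOIN Properties p ON p.propertyId = pv.propertyId
        GROUP BY c.customerId;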

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions, has many Payments. In my result set, I need the following aggregate values: number of unique contributors (DonorID on the Contribution table), total contributed (SUM of Amount on the Contribution table), and total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
               (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
               (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
               (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7-0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
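    A third hedged sketch, for comparison only: pre-aggregate each child table once in a derived table, then join, which avoids running a correlated subquery per project row:

        SELECT p.ID AS project_id,
               IFNULL(pay.total_paid, 0) AS total_paid,
               IFNULL(con.donor_count, 0) AS donor_count,
               IFNULL(con.total_received, 0) AS total_received
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
                   FROM Payment GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS donor_count,
                          SUM(Amount) AS total_received
                   FROM Contribution GROUP BY RecipientID) con ON con.RecipientID = p.ID;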

  • Process semaphores on Linux: waiting

    - by coubeatczech
    I'm trying to write a simple program that starts and waits on a System V semaphore until it gets terminated by a signal:

        union semun {
            int val;
            struct semid_ds *buf;
            unsigned short int *array;
            struct seminfo *__buf;
        };

        int main() {
            int semaphores = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);

            union semun arg;
            arg.val = 0;
            semctl(semaphores, 0, SETVAL, arg);

            struct sembuf operations[1];
            operations[0].sem_num = 0;
            operations[0].sem_op = -1;
            operations[0].sem_flg = 0;
            semop(semaphores, operations, 1);

            fprintf(stderr, "Why?\n");
            return 0;
        }

    I expect that every time this program is executed, nothing actually happens: it should wait on the semaphore. But every time, it goes straight through the semaphore and writes Why?. Why?
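    A hedged debugging sketch, not from the original post: if semget, semctl, or semop fails, the call returns -1 immediately instead of blocking, so checking every return value usually reveals why the wait never happens:

        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        int main(void) {
            int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);
            if (semid == -1) { perror("semget"); return 1; }

            union semun { int val; struct semid_ds *buf; unsigned short *array; } arg;
            arg.val = 0;
            if (semctl(semid, 0, SETVAL, arg) == -1) { perror("semctl"); return 1; }

            struct sembuf op = { 0, -1, 0 };  /* sem_num, sem_op, sem_flg */
            if (semop(semid, &op, 1) == -1) { perror("semop"); return 1; }

            fprintf(stderr, "Why?\n");
            return 0;
        }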

  • Linker error: wants C++ virtual base class destructor

    - by jdmuys
    I have a link error where the linker complains that my concrete class's destructor is calling its abstract superclass's destructor, the code for which is missing. This is using GCC 4.2 on Mac OS X from Xcode. I saw http://stackoverflow.com/questions/307352/g-undefined-reference-to-typeinfo but it's not quite the same thing. Here is the linker error message:

        Undefined symbols:
          "ConnectionPool::~ConnectionPool()", referenced from:
              AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool() in RKConnector.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Here is the abstract base class declaration:

        class ConnectionPool {
        public:
            static ConnectionPool* newPool(std::string h, short p, std::string u, std::string pw, std::string b);
            virtual ~ConnectionPool() = 0;
            virtual int keepAlive() = 0;
            virtual int disconnect() = 0;
            virtual sql::Connection * getConnection(char *compression_scheme = NULL) = 0;
            virtual void releaseConnection(sql::Connection * theConnection) = 0;
        };

    Here is the concrete class declaration:

        class AlwaysConnectedConnectionZPool : public ConnectionPool {
        protected:
            // <snip data members>
        public:
            AlwaysConnectedConnectionZPool(std::string h, short p, std::string u, std::string pw, std::string b);
            virtual ~AlwaysConnectedConnectionZPool();
            virtual int keepAlive();   // will make sure the connection doesn't time out; call regularly
            virtual int disconnect();  // disconnects/destroys all connections
            virtual sql::Connection * getConnection(char *compression_scheme = NULL);
            virtual void releaseConnection(sql::Connection * theConnection);
        };

    Needless to say, all those members are implemented. Here is the destructor:

        AlwaysConnectedConnectionZPool::~AlwaysConnectedConnectionZPool() {
            printf("AlwaysConnectedConnectionZPool destructor call");
            // nothing to destruct in fact
        }

    and also the factory routine:

        ConnectionPool* ConnectionPool::newPool(std::string h, short p, std::string u, std::string pw, std::string b) {
            return new AlwaysConnectedConnectionZPool(h, p, u, pw, b);
        }

    I can fix this by artificially making my abstract base class concrete, but I'd rather do something better. Any idea? Thanks.
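    For reference, the usual resolution: a pure virtual destructor must still have a definition, because every derived destructor calls the base destructor directly rather than through the vtable. A minimal sketch:

        // In the header: the destructor stays pure, so the class stays abstract.
        class ConnectionPool {
        public:
            virtual ~ConnectionPool() = 0;
        };

        // In one .cpp file: the definition the linker was looking for.
        ConnectionPool::~ConnectionPool() {}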

  • C# string interning

    - by CodingThunder
    I am trying to understand string interning and why it doesn't seem to work in my example. The point of the example is to show that Example 1 uses less (a lot less) memory, as it should only have 10 strings in memory. However, in the code below both examples use roughly the same amount of memory (virtual size and working set). Please advise: why isn't Example 1 using a lot less memory? Thanks. Example 1:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++) {
            for (int k = 0; k < 10; k++) {
                list.Add(string.Intern(k.ToString()));
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

    Example 2:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++) {
            for (int k = 0; k < 10; k++) {
                list.Add(k.ToString());
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();
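    A hedged way to compare the two, sketched here rather than taken from the post: process-level numbers such as virtual size and working set are dominated by the runtime itself, so measuring the managed heap directly shows the interning effect more clearly:

        long before = GC.GetTotalMemory(true);   // force a collection, then read heap size
        IList<string> list = new List<string>(100000);
        for (int i = 0; i < 10000; i++)
            for (int k = 0; k < 10; k++)
                list.Add(string.Intern(k.ToString()));
        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Managed bytes used: " + (after - before));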

  • Java HashSet using a specified method

    - by threenplusone
    I have a basic class 'HistoryItem', like so:

        public class HistoryItem {
            private Date startDate;
            private Date endDate;
            private Info info;
            private String details;

            @Override
            public int hashCode() {
                int hash = (startDate == null ? 0 : startDate.hashCode());
                hash = hash * 31 + (endDate == null ? 0 : endDate.hashCode());
                return hash;
            }
        }

    I am currently using a HashSet to remove duplicates from an ArrayList on the startDate and endDate fields, which is working correctly. However, I also need to remove duplicates on different fields (info and details). My question is this: is there a way to specify a different method that HashSet will use in place of hashCode()? Something like this:

        public int hashCode_2() {
            int hash = (info == null ? 0 : info.hashCode());
            hash = hash * 31 + (details == null ? 0 : details.hashCode());
            return hash;
        }

        Set<HistoryItem> removeDups = new HashSet<HistoryItem>();
        removeDups.setHashMethod(hashCode_2);

    Or is there another way that I should be doing this?
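    HashSet itself always consults hashCode()/equals(), so one hedged workaround sketch (not from the post) is a thin wrapper that keys on the other two fields; getInfo() and getDetails() are assumed accessors:

        // Sketch: wrap HistoryItem so a HashSet<ByInfoDetails> dedupes on info + details.
        final class ByInfoDetails {
            final HistoryItem item;
            ByInfoDetails(HistoryItem item) { this.item = item; }

            @Override
            public int hashCode() {
                int hash = (item.getInfo() == null ? 0 : item.getInfo().hashCode());
                return hash * 31 + (item.getDetails() == null ? 0 : item.getDetails().hashCode());
            }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof ByInfoDetails)) return false;
                HistoryItem other = ((ByInfoDetails) o).item;
                return (item.getInfo() == null ? other.getInfo() == null : item.getInfo().equals(other.getInfo()))
                    && (item.getDetails() == null ? other.getDetails() == null : item.getDetails().equals(other.getDetails()));
            }
        }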

  • Array of templated structs

    - by Jakub Mertlik
    I have structs templated by int, derived from a Base struct:

        struct Base {
            int i;
            double d;
        };

        template <int N>
        struct Derv : Base {
            static const int mN = N;
        };

    I need to make an array of Derv<N> where N can vary for each struct in that array. I know C/C++ does not allow arrays of objects of different types, but is there a way around this? I was thinking of separating the type information somehow (hints like pointers to the Base struct or use of a union spring to my mind, but with all of these I don't know how to store the type information of each array element for use AT COMPILE TIME). As you can see, the memory layout of each Derv<N> is the same. I need to access the type of each array element for template specialization later in my code. The general aim of all this is to have a compile-time dispatch mechanism without the need for a runtime "type switch" somewhere in the code. Thank you.
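    One hedged compile-time alternative, sketched under C++0x assumptions rather than taken from the post: a std::tuple is heterogeneous but keeps every element's exact type visible to templates, so each Derv<N> keeps its N:

        #include <tuple>

        struct Base { int i; double d; };

        template <int N>
        struct Derv : Base { static const int mN = N; };

        // A compile-time "array" of Derv<N> with varying N:
        std::tuple<Derv<1>, Derv<4>, Derv<9>> cells;

        // Each element's N is still available for specialization:
        static_assert(std::tuple_element<1, decltype(cells)>::type::mN == 4, "N preserved");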

  • Access resources from another APK for a widget (like a theme APK)

    - by Diljeet
    Hello, I made a theme with drawables and strings. I can access the theme's resources with the following calls:

        String packagename = "com.....";
        Resources res = context.getPackageManager().getResourcesForApplication(packagename);
        int strid = res.getIdentifier(packagename + ":string/" + name, "string", packagename);
        int drawableid = res.getIdentifier(packagename + ":drawable/" + name, "drawable", packagename);
        String name = res.getString(resid);
        Bitmap bmp = BitmapFactory.decodeResource(res, resid);
        // or
        Drawable dr = res.getDrawable(resid);

    I can get the drawables, but I don't know how to set XML drawables on a widget. I can only call:

        setImageViewResource(int viewid, int resId);

    So I am asking how to access XML resources from another APK and set widget drawables from them. I want to access a resource like the one below from another theme APK, but I can only access plain drawables (so I can't use it for widgets):

        <?xml version="1.0" encoding="utf-8"?>
        <selector xmlns:android="http://schemas.android.com/apk/res/android">
            <item android:state_pressed="true" android:drawable="@drawable/wplayh" />
            <item android:drawable="@drawable/wplay" />
        </selector>
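    A hedged partial workaround, sketched rather than confirmed: RemoteViews can only resolve resource IDs from the widget's own package, but a bitmap decoded from the theme package can be pushed directly with setImageViewBitmap (at the cost of losing state-list behavior); views and R.id.play_button are invented names here:

        Resources res = context.getPackageManager().getResourcesForApplication(packagename);
        int drawableId = res.getIdentifier(packagename + ":drawable/wplay", "drawable", packagename);
        Bitmap bmp = BitmapFactory.decodeResource(res, drawableId);
        views.setImageViewBitmap(R.id.play_button, bmp);  // just the image, no pressed state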

  • Help in J2ME with creating an image and splitting it

    - by HAMED
    Can anyone tell me how I can split a PNG image into many PNG images in J2ME? For example: I have a source image of 150x150 pixels and want to split it into 15x15-pixel images. I wrote some elementary code that throws an exception. This is my code:

        public class HelloMIDlet extends MIDlet implements CommandListener {
            private boolean midletPaused = false;
            private Command exitCommand;
            private Form form;
            private StringItem stringItem;
            Image im, im2;
            Form form1 = null;

            public HelloMIDlet() {
                try {
                    // create source image
                    im = Image.createImage("/image1.JPG");
                    int height = im.getHeight();
                    int width = im.getWidth();
                    int x = 0;
                    int y = 0;
                    while (y < height) {
                        while (x < width) {
                            // create a 15x15 pixel piece of the source image
                            im2 = im.createImage(im, x, y, 15, 15, Sprite.TRANS_NONE);
                            x += 15;
                        }
                        y += 15;
                        x = 0;
                    }
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }

    Please help me make it right... It's urgent! Thanks a lot, guys.
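    A hedged sketch of the slicing loop with bounds guards, not the poster's code: Image.createImage(Image, x, y, w, h, transform) throws IllegalArgumentException if the region crosses the source edge, so only full tiles inside the image are taken here:

        int tile = 15;
        int cols = im.getWidth() / tile;
        int rows = im.getHeight() / tile;
        Image[] tiles = new Image[cols * rows];
        int n = 0;
        for (int y = 0; y + tile <= im.getHeight(); y += tile) {
            for (int x = 0; x + tile <= im.getWidth(); x += tile) {
                // Copy one 15x15 region out of the source image.
                tiles[n++] = Image.createImage(im, x, y, tile, tile, Sprite.TRANS_NONE);
            }
        }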

  • Unit Testing Private Method in Resource Managing Class (C++)

    - by BillyONeal
    I previously asked this question under another name but deleted it because I didn't explain it very well. Let's say I have a class which manages a file. Let's say that this class treats the file as having a specific file format, and contains methods to perform operations on this file:

        class Foo {
            std::wstring fileName_;
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName) {
                // Construct a Foo here.
            };

            int getChecksum() {
                // Open the file and read some part of it.
                // Long method to figure out what checksum it is.
                // Return the checksum.
            }
        };

    Let's say I'd like to be able to unit test the part of this class that calculates the checksum. Unit testing the parts of the class that load in the file and such is impractical, because to test every part of the getChecksum() method I might need to construct 40 or 50 files! Now let's say I'd like to reuse the checksum method elsewhere in the class. I extract the method so that it now looks like this:

        class Foo {
            std::wstring fileName_;

            static int calculateChecksum(const std::vector<unsigned char> &fileBytes) {
                // Long method to figure out what checksum it is.
            }
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName) {
                // Construct a Foo here.
            };

            int getChecksum() {
                // Open the file and read some part of it.
                return calculateChecksum( something );
            }

            void modifyThisFileSomehow() {
                // Perform modification.
                int newChecksum = calculateChecksum( something );
                // Apply the newChecksum to the file.
            }
        };

    Now I'd like to unit test the calculateChecksum() method because it's easy to test and complicated, and I don't care about unit testing getChecksum() because it's simple and very difficult to test. But I can't test calculateChecksum() directly because it is private. Does anyone know of a solution to this problem?
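    One hedged refactoring sketch, not the poster's design: lift the pure computation out into a free function in its own header, test that directly, and let Foo delegate to it; readSomePartOfFile is an invented helper name:

        // checksum.h -- a pure function with no file I/O, testable in isolation.
        #include <vector>
        int calculateChecksum(const std::vector<unsigned char>& fileBytes);

        // foo.cpp -- Foo stays the only place that touches the file.
        int Foo::getChecksum() {
            std::vector<unsigned char> bytes = readSomePartOfFile(fileName_);
            return calculateChecksum(bytes);
        }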

  • How to call functions in a C DLL that take pointers as arguments, from C#

    - by AndrejaKo
    Hi people, this is my first post here! I'm trying to make a Windows Forms program in C# that will use a precompiled C library to access a smart card and provide output from it. For the library, I have a .dll, a .lib, and a .h, and no source. In the .h file several structs are defined, and the most interesting functions of the .dll expect pointers to allocated structs as arguments. I've been calling functions inside the .dll like this: for example, the function

        EID_API int WINAPI EidStartup(int nApiVersion);

    would be called like this:

        [DllImport("CelikApi.dll")]  // the name of the .dll
        public static extern int EidStartup(int nApiVersion);

    Now my problem is that I can't find the C# equivalent of C pointers to dynamically allocated structures, so I don't know what to pass as an argument to functions which take C pointers. I don't have much experience in C#, but its use looked like the easiest way to build the program I need. I tried C++, but Visual Studio 2010 doesn't have IntelliSense for C++/CLI. If you can point me to something better, feel free to do so.
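    A hedged sketch of the usual P/Invoke pattern: declare a C# struct with sequential layout mirroring the C definition and pass it by ref, which the marshaller turns into a pointer. The struct fields and the EidReadDocumentData entry point below are invented placeholders, not the real CelikApi signatures:

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct EID_DATA  // hypothetical mirror of a struct from the .h file
        {
            public int nVersion;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
            public string szField;
        }

        [DllImport("CelikApi.dll")]
        public static extern int EidReadDocumentData(ref EID_DATA pData);  // hypothetical

        // Usage: 'ref' marshals the struct's address to the C side.
        // EID_DATA data = new EID_DATA();
        // int status = EidReadDocumentData(ref data);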

  • Recursive Maze in Java

    - by Api
    I have written a short Java program for solving a simple maze problem: going from S to G. I do not understand where it is going wrong.

        import java.util.Scanner;

        public class tester {
            static char[][] grid = {
                {'.', '.'},
                {'.', '.'},
                {'S', 'G'},
            };
            static int a = 2;
            static int b = 2;

            static boolean findpath(int x, int y) {
                if ((x > grid.length - 1) || (y > grid[0].length - 1) || (x < 0 || y < 0)) {
                    return false;
                } else if (x == a && y == b) {
                    return true;
                } else if (findpath(x, y - 1) == true) {
                    return true;
                } else if (findpath(x + 1, y) == true) {
                    return true;
                } else if (findpath(x, y + 1) == true) {
                    return true;
                } else if (findpath(x - 1, y) == true) {
                    return true;
                }
                return false;
            }

            public static void main(String[] args) {
                boolean result = findpath(2, 0);
                System.out.print(result);
            }
        }

    I am giving the starting position directly, and the goal is defined in a and b. Do help.
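    Two things stand out, sketched here as hints rather than a full answer: the goal (a, b) = (2, 2) lies outside a grid with only two columns (valid y is 0..1), and nothing marks cells as visited, so the recursion revisits the same cells until the stack overflows. A minimal visited-guard sketch:

        static boolean[][] visited = new boolean[3][2];  // same shape as grid

        static boolean findpath(int x, int y) {
            if (x < 0 || y < 0 || x > grid.length - 1 || y > grid[0].length - 1) return false;
            if (visited[x][y]) return false;  // already explored; break the cycle
            visited[x][y] = true;
            if (x == a && y == b) return true;
            return findpath(x, y - 1) || findpath(x + 1, y)
                || findpath(x, y + 1) || findpath(x - 1, y);
        }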

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project with a single producer thread which writes events into a buffer, and an additional single consumer thread which takes events from the buffer. My goal is to optimize this thing for a single machine to achieve maximum throughput. Currently, I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, and therefore the pointers are only updated by a single thread):

        #define BUF_SIZE 32768

        struct buf_t {
            volatile int writepos;
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;
        };

        void produce(buf_t *b, void *e) {
            int next = (b->writepos + 1) % BUF_SIZE;
            while (b->readpos == next);  // queue is full: wait
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        void *consume(buf_t *b) {
            while (b->readpos == b->writepos);  // nothing to consume: wait
            int next = (b->readpos + 1) % BUF_SIZE;
            void *res = b->buffer[b->readpos];
            b->readpos = next;
            return res;
        }

        buf_t *alloc() {
            buf_t *b = (buf_t *)malloc(sizeof(buf_t));
            b->writepos = 0;
            b->readpos = 0;
            return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also resulted in some speed. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding, etc.?
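    A hedged sketch of explicit padding, assuming 64-byte cache lines: guarantee each index sits alone on its own line instead of relying on the large buffer between them:

        #define CACHE_LINE 64

        struct buf_t {
            volatile int writepos;                /* producer-owned index            */
            char pad1[CACHE_LINE - sizeof(int)];  /* keep it off the consumer's line */
            void * volatile buffer[BUF_SIZE];
            volatile int readpos;                 /* consumer-owned index            */
            char pad2[CACHE_LINE - sizeof(int)];
        };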

  • Stored Procedure Does Not Fire Last Command

    - by jp2code
    On our SQL Server (version 10.0.1600), I have a stored procedure that I wrote. It is not throwing any errors, and it is returning the correct values after making the insert in the database. However, the last command, spSendEventNotificationEmail (which sends out email notifications), is not being run. I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works. Is there something wrong with how I call it in my stored procedure?

        [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50))
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            DECLARE @id int
            SET @id = -1

            -- Insert statements for procedure here
            SELECT A.ID, PacketID, StatusID
            INTO #act
            FROM Action A
            JOIN Request R ON (R.ID = A.RequestID)
            WHERE (PacketID = @packetID) AND (StatusID = @statusID)

            IF ((SELECT COUNT(ID) FROM #act) = 0)
            BEGIN
                -- this statusID has not been entered. Continue
                SELECT ID, MTF INTO #req FROM Request WHERE PacketID = @packetID

                WHILE (0 < (SELECT COUNT(ID) FROM #req))
                BEGIN
                    SELECT TOP 1 @id = ID FROM #req

                    INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
                    VALUES (@id, @statusID, @empID, GETDATE())

                    IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf))))
                    BEGIN
                        UPDATE Request SET MTF = @mtf WHERE ID = @id
                    END

                    DELETE #req WHERE ID = @id
                END
                DROP TABLE #req

                SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action
                SELECT TOP 1 @statusID = ID FROM Status WHERE (@statusID < ID) AND (-1 < Sequence)

                EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
            END
            ELSE
            BEGIN
                SET @statusID = -1
            END

            DROP TABLE #act
        END

    Idea of how the data tables are connected: (diagram not included here)
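    A hedged debugging step, not from the original post: PRINT the values feeding the EXEC just before it runs. Note also that if the SELECT TOP 1 ... FROM Status finds no matching row, @statusID silently keeps its previous value, which is a common reason a downstream call appears to do nothing:

        PRINT 'About to send: packetID=' + CAST(@packetID AS varchar(12))
            + ', statusID=' + CAST(@statusID AS varchar(12));
        EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'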

  • Get index of nth occurrence of char in a string

    - by StickFigs
    I'm trying to make a function that returns the index of the Nth occurrence of a given char in a string. Here is my attempt:

        private int IndexOfNth(string str, char c, int n) {
            int index = str.IndexOf(c) + 1;
            if (index >= 0) {
                string temp = str.Substring(index, str.Length - index);
                for (int j = 1; j < n; j++) {
                    index = temp.IndexOf(c) + 1;
                    if (index < 0) {
                        return -1;
                    }
                    temp = temp.Substring(index, temp.Length - index);
                }
                index = index + (str.Length);
            }
            return index;
        }

    This should find the first occurrence, chop off that front part of the string, find the first occurrence from the new substring, and so on until it gets the index of the Nth occurrence. However, I failed to consider how the index of the final substring is offset from the original actual index in the original string. How do I make this work? Also, as a side question: if I want the char to be the tab character, do I pass this function '\t' or what?
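    A hedged alternative sketch that sidesteps the offset bookkeeping entirely: the IndexOf overload that takes a start position walks the original string, so the returned index is already relative to it. And for the side question: yes, '\t' is the ordinary char literal for tab, so it can be passed as-is.

        private static int IndexOfNth(string str, char c, int n) {
            int index = -1;
            for (int i = 0; i < n; i++) {
                index = str.IndexOf(c, index + 1);  // search resumes after the last hit
                if (index == -1) return -1;         // fewer than n occurrences
            }
            return index;
        }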

  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8 MB (if I'm wrong, PLEASE correct me) and, incidentally, 1 MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64 MB is used JUST for threads! The worst part is, I'm never using more than 100 KB of stack per thread (I abuse the heap a LOT ;)). My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked on Windows builds, and posix_* is linked under Linux builds):

        // windows_stack_limiter.c
        int limit_stack_size() {
            // Windows impl.
            return 0;
        }

        // posix_stack_limiter.c
        int limit_stack_size() {
            // Linux impl.
            return 0;
        }

        // stack_limiter.cpp
        int limit_stack_size();
        static volatile int placeholder = limit_stack_size();

    How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember, I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
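    A hedged sketch of the POSIX half: on Linux with glibc, new threads default to the soft RLIMIT_STACK value as their stack size, so lowering it before any threads start caps them all. This is an illustration under that assumption, not a portable guarantee:

        // posix_stack_limiter.c -- cap default thread stacks at 1 MB.
        #include <sys/resource.h>

        int limit_stack_size(void) {
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) != 0) return -1;
            rl.rlim_cur = 1024 * 1024;   /* soft limit consulted at thread creation */
            return setrlimit(RLIMIT_STACK, &rl);
        }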

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ",");  // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens) {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s, std::ios_base& (*f)(std::ios_base&) = std::dec) {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.
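    Two hedged observations, plus a sketch that is illustrative rather than a known fix. First, &FromString<int> has a defaulted third parameter that the bound function type cannot carry. Second, std::transform stores each call's return value, which here is the bool success flag rather than the parsed number, and origins.begin() writes into an empty vector, which is undefined. A small invented functor with a back_inserter expresses the intent more directly:

        #include <iterator>  // std::back_inserter

        // Invented helper, not from the post: parse one token, return the value.
        struct ToInt {
            int operator()(const std::string& s) const {
                int value = 0;
                FromString(value, s);  // the default std::dec applies normally here
                return value;
            }
        };

        std::transform(originTokens.begin(), originTokens.end(),
                       std::back_inserter(origins), ToInt());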

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy {
            static T foo(void) { return T(); }
        };

        int main(void) {
            typedef dummy<bool> dummy_type;
            auto x = [](void) { bool b = dummy_type::foo(); };
            // auto x = [](void) { bool b = dummy<bool>::foo(); };  // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void) {
            struct dummy {};
            auto x = [](void) { dummy d; };
        }

        // works as expected
        int main(void) {
            typedef int integer;
            auto x = [](void) { integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug, though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using scope resolution on locally defined types. (Spotted by dvide.) The crash bug has to do with... who knows. :) Update: according to the bug reports, they have both been fixed for the next release of Visual Studio 2010.

  • SQL Query Math Gymnastics

    - by keruilin
    I have two tables of concern here: users and race_weeks. User has many race_weeks, and race_week belongs to User; therefore, user_id is a FK in the race_weeks table. I need to perform some challenging math on fields in the race_weeks table in order to return the users with the most all-time points. Here are the fields that we need to manipulate in the race_weeks table:

        races_won (int)
        races_lost (int)
        races_tied (int)
        points_won (int, pos or neg)
        recordable_type (varchar; robots can race, but we're only concerned about type 'User')

    Just so that you fully understand the business logic at work here: over the course of a week, a user can participate in many races, and the race_week record represents the summary results of the user's races for that week. A user is considered active for the week if races_won, races_lost, or races_tied is greater than 0; otherwise the user is inactive. So here's what we need to do in our query in order to return users with the most points won (actually net_points_won): calculate each user's net_points_won (not a field in the DB). To calculate net_points, you take (1000 * count_of_active_weeks) - sum(points_won). (Why 1000? Just imagine that every week the user is spotted 1000 points to compete and enter races. We want to factor out what we spot the user, because a user could enter only one race for the week for 100 points and be sitting on 900, which would skew who actually EARNED the most points.) This one is a little convoluted, so let me know if I can clarify further.
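    A hedged sketch of the aggregation, with table and column names taken from the description above. It follows the formula exactly as stated; if the intent is points earned beyond the weekly stake, swap the two operands of the subtraction:

        SELECT u.id,
               1000 * SUM(CASE WHEN rw.races_won + rw.races_lost + rw.races_tied > 0
                               THEN 1 ELSE 0 END)
               - SUM(rw.points_won) AS net_points_won
        FROM users u
        JOIN race_weeks rw ON rw.user_id = u.id
                          AND rw.recordable_type = 'User'
        GROUP BY u.id
        ORDER BY net_points_won DESC;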

  • How to figure out which key was pressed on a BlackBerry

    - by Skrud
    What I want: to know when the user has pressed the button that has the number '2' on it, for example. I don't care whether Alt or Shift has been pressed. The user has pressed a button, and I want to evaluate whether this button has '2' printed on it. Naturally, if I switch devices this key will change: on a Bold 9700/9500 this is the 'E' key, and on a Pearl it is the 'T'/'Y' key. I've managed to get this working in what appears to be a roundabout way, by looking up the keycode of the '2' character with the Alt modifier enabled and using Keypad.key() to get the actual button:

        // figure out which key the '2' is on:
        final int BUTTON_2_KEY = Keypad.key(KeypadUtil.getKeyCode('2', KeypadListener.STATUS_ALT, KeypadUtil.MODE_EN_LOCALE));

        protected boolean keyDown(int keycode, int time) {
            int key = Keypad.key(keycode);
            if (key == BUTTON_2_KEY) {
                // do something
                return true;
            }
            return super.keyDown(keycode, time);
        }

    I can't help but wonder if there is a better way to do this. I've looked at the constants defined in KeypadListener and Keypad, but I can't find any constants mapped to the actual buttons on the device. Would any more experienced BlackBerry devs care to lend a helping hand? Thanks!

  • Drawing a honeycomb with AS3

    - by vitto
    Hi, I'm trying to create a honeycomb with AS3, but I have a problem with cell positioning. I've already created the cells (not with code) and looped over them, calling a placement function with the parameters I thought it needed (the honeycomb cell is already in a sprite container in the center of the stage). The only thing I calculate in placeCell is the angle, which I should obtain directly inside the called function. (Note: the angle is reversed, but that isn't important, and the colors are useful only to visually distinguish the cases.) My for loop calls placeCell and passes cell, current_case, counter (index), and cell_lv (the honeycomb cell level). I thought that was what I needed, but I'm not skilled in geometry and trigonometry, so I don't know how to position the cells correctly:

        function placeCell(cell:Sprite, current_case:int, counter:int, cell_lv:int):void {
            var margin:int = 2;
            var angle:Number = (360 / (cell_lv * 6)) * (current_case + counter);
            var radius:Number = (cell.width + margin) * cell_lv;
            cell.x = radius * Math.cos(angle);
            cell.y = radius * Math.sin(angle);
            trace("LV " + cell_lv + " current_case " + current_case + " counter " + counter + " angle " + angle + " radius " + radius);
        }

    How can I solve this?
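    One concrete issue, with a hedged fix: Math.cos and Math.sin take radians, while the angle above is computed in degrees, so the cells land at essentially arbitrary positions. A minimal sketch of the conversion:

        // Convert degrees to radians before the trig calls.
        var radians:Number = angle * Math.PI / 180;
        cell.x = radius * Math.cos(radians);
        cell.y = radius * Math.sin(radians);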
