Search Results

Search found 6 results on 1 page for 'taspeotis'.

Page 1/1

  • How to write this in different regex flavours

    - by taspeotis
    I have the following data:

      a b c d FROM:<uniquepattern1> e f g h TO:<uniquepattern2>
      i j k l FROM:<uniquepattern1> m n o p TO:<uniquepattern3>
      q r s t FROM:<uniquepattern4> u v w x TO:<uniquepattern5>

    I would like a regex that can find the contents of TO: whenever FROM:<uniquepattern1> is encountered, so the results here would be uniquepattern2 and uniquepattern3. I am hopeless with regex, so I would appreciate any pointers on how to write this (lookahead assertions?) and any differences between regex flavours on different platforms (e.g. C# .NET Regex vs. grep vs. Perl) that might be relevant here. Thank you.
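    Not part of the original question: a minimal sketch of one possible approach, using C++11 std::regex (ECMAScript flavour). A lazy ".*?" skips the fields between FROM:<uniquepattern1> and the next TO:<...>, and a capture group grabs the TO value; this assumes FROM/TO pairs alternate as they do in the sample data.

      #include <iostream>
      #include <regex>
      #include <string>

      int main()
      {
          // Sample data from the question, joined into one string.
          const std::string data =
              "a b c d FROM:<uniquepattern1> e f g h TO:<uniquepattern2> "
              "i j k l FROM:<uniquepattern1> m n o p TO:<uniquepattern3> "
              "q r s t FROM:<uniquepattern4> u v w x TO:<uniquepattern5>";

          // Match FROM:<uniquepattern1>, lazily skip to the next TO:, capture its value.
          const std::regex re("FROM:<uniquepattern1>.*?TO:<([^>]+)>");

          for (std::sregex_iterator it(data.begin(), data.end(), re), end; it != end; ++it)
              std::cout << (*it)[1] << "\n";   // prints uniquepattern2, then uniquepattern3

          return 0;
      }

    Roughly the same pattern, with the capture group, works in Perl, PCRE (grep -P) and .NET's Regex class.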

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

      EnterCriticalSection(&m_Crit4);
      m_bSomeVariable = true;
      LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary there's no need to synchronise this operation with a critical section. I did some more research online to see whether this operation needs synchronisation, and I came up with two scenarios where it does:

      1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
      2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to such a variable with memory barriers, ensuring that the variable is always completely written to/read from main system memory before it is used. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    So... QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte variable on x86/x64 between read/write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned?

    I did some more digging into our code and the variables defined in the class:

      class CExample
      {
      private:
          CRITICAL_SECTION m_Crit1;   // Protects variable a
          CRITICAL_SECTION m_Crit2;   // Protects variable b
          CRITICAL_SECTION m_Crit3;   // Protects variable c
          CRITICAL_SECTION m_Crit4;   // Protects variable d
          // ...
      };

    Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect; if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; are named mutexes shared across all processes, or only between processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes?

    I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problems should they be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)
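    Not part of the original question: a minimal sketch, assuming Win32/Visual C++, of the two ideas the questions circle around. CScopedLock is a made-up name for a guard in the spirit of the CCriticalSection class described above, and InterlockedExchange shows a flag update that is atomic and carries a full memory barrier without taking a lock, which is stronger than what MSVC's "volatile" alone guarantees.

      #include <windows.h>

      // Guard in the spirit of the CCriticalSection described above: enters the
      // critical section on construction and leaves it on destruction.
      class CScopedLock
      {
      public:
          explicit CScopedLock(CRITICAL_SECTION& cs) : m_cs(cs) { EnterCriticalSection(&m_cs); }
          ~CScopedLock() { LeaveCriticalSection(&m_cs); }
      private:
          CScopedLock(const CScopedLock&);             // non-copyable
          CScopedLock& operator=(const CScopedLock&);
          CRITICAL_SECTION& m_cs;
      };

      // Flag update without a critical section: InterlockedExchange is atomic
      // and acts as a full memory barrier on Windows.
      volatile LONG g_bSomeVariable = FALSE;

      void SetFlag()
      {
          InterlockedExchange(&g_bSomeVariable, TRUE);
      }

      BOOL ReadFlag()
      {
          // An aligned 32-bit read is atomic on x86/x64; volatile stops the
          // compiler from caching the value in a register.
          return g_bSomeVariable;
      }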

    Read the article

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.

    We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.

    I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

      Connection->Execute("SET @GetStartDate = ...");
      Connection->Execute("SET @GetEndDate = ...");
      // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

      // Command has previously been created
      ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
      ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
      // Set values and attach etc...

    What I would like to know is whether there is something like:

      Connection->SetParameter("GetStartDate", "20090101");
      Connection->SetParameter("GetEndDate", "20100101");

    where these persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
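    Not part of the original question: a minimal sketch, in the same pseudo-code style, of one way to get connection-scoped values. T-SQL local variables such as @GetStartDate only live for a single batch, but a local temporary table created on the connection survives until the connection is closed, so later queries on that connection can read the values back. #ReportParams is a made-up name.

      // Create once, right after opening the connection.
      Connection->Execute("CREATE TABLE #ReportParams (GetStartDate CHAR(8), GetEndDate CHAR(8))");
      Connection->Execute("INSERT INTO #ReportParams VALUES ('20090101', '20100101')");

      // Any later query issued on the same connection can then use the values:
      //   SELECT ... FROM Orders
      //   WHERE OrderDate BETWEEN (SELECT GetStartDate FROM #ReportParams)
      //                       AND (SELECT GetEndDate   FROM #ReportParams)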

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.

    We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.

    I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

      Connection->Execute("SET @GetStartDate = ...");
      Connection->Execute("SET @GetEndDate = ...");
      // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

      // Command has previously been created
      ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
      ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
      // Set values and attach etc...

    What I would like to know is whether there is something like:

      Connection->SetParameter("GetStartDate", "20090101");
      Connection->SetParameter("GetEndDate", "20100101");

    where these persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
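    Not part of the original question: another connection-scoped option, sketched in the same pseudo-code style, is CONTEXT_INFO, a 128-byte varbinary slot SQL Server keeps per session; anything written to it is visible to every later batch on the same connection. The packing of the two dates into one string here is arbitrary.

      Connection->Execute(
          "DECLARE @ctx VARBINARY(128);"
          "SET @ctx = CAST('20090101|20100101' AS VARBINARY(128));"
          "SET CONTEXT_INFO @ctx");

      // A later query on the same connection can read it back, e.g.:
      //   SELECT CAST(CONTEXT_INFO() AS VARCHAR(128))
      //   -- returns '20090101|20100101' (possibly padded with zero bytes)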

    Read the article

  • Benefits of "Don't Fragment" on TCP Packets?

    - by taspeotis
    One of our customers is having trouble submitting data from our application (on their PC) to a server (in a different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops).

    I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than or equal to 1100 and is dropping the packet. This affects 1 client in 5000, but since everybody's routes will be different this is to be expected.

    The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we set the flag; the code to do this was written by a previous developer.

    So... are there any benefits or justification to setting the DF flag on TCP packets for application data? I can think of reasons it is needed for network diagnostics applications, but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL behaves like a stream: regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem.

    If there's no good justification I will be changing the behaviour of our application. Thanks in advance.
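    Not part of the original question: a small Winsock sketch (assumed, since the post does not show the application's code) of the socket option that controls the IP "Don't Fragment" bit, IP_DONTFRAGMENT. For TCP the Windows stack normally manages DF itself as part of Path MTU Discovery, so this is mostly useful on a UDP probe socket when trying to find which hop has the roughly 1100-byte MTU.

      #include <winsock2.h>
      #include <ws2tcpip.h>
      #pragma comment(lib, "ws2_32.lib")

      // Toggle the "Don't Fragment" bit for packets sent on this socket. With DF
      // set, a router whose next-hop MTU is too small must drop the packet and
      // send back ICMP "fragmentation needed" rather than fragmenting it.
      bool SetDontFragment(SOCKET s, bool enable)
      {
          DWORD df = enable ? 1 : 0;
          return setsockopt(s, IPPROTO_IP, IP_DONTFRAGMENT,
                            reinterpret_cast<const char*>(&df),
                            sizeof(df)) == 0;
      }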

    Read the article

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5, and both have the same problem. The same games running under DirectX 9 work fine, except for some small black squares once in a while.

    I have the following specs:

      Asus P7P55D LE
      Intel Core i5 750
      Sapphire Radeon HD4850 1GB
      2x2GB Patriot Viper II Sector 5, DDR3 1600 MHz
      OCZ Stealth X Stream 500SXS 500W

    At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 x64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games, I believe. I have spent many hours searching for a solution but couldn't find one, so any help is appreciated. Thank you.

    EDIT: I did some further investigation of the problem, and it seems taspeotis was right: it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely, both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using the RE DirectX 10 Benchmark, as it consistently crashed around the same point, and now I can run the full benchmark with no artifacts at all. Unfortunately, the underclock has an obvious hit in performance. Although it's not critical, it's definitely noticeable.

    So, if the video memory test software showed no errors, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is, and what is good software to try to increase it? I tried ATI Tray Tools, but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst case scenario, if I don't find a solution I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

    Read the article
