Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Do I have to bind a UDP socket in my client program to receive data? (I always get WSAEINVAL)...

    - by Incubbus
    Hey there, I have the following problem: I am starting WSA, then I am creating a UDP socket (AF_INET, SOCK_DGRAM, IPPROTO_UDP) and trying to recvfrom on this socket, but it always returns -1 and I get WSAEINVAL (10022), and I don't know why. When I bind the port, that does not happen, but it seems wrong to have to bind the client's socket. (As far as I remember, I never had this problem before.) Does anyone know why this happens? (I am sending data to my server, which answers, or at least tries to.)

        Inc::STATS CConnection::_RecvData(sockaddr* addr, std::string &strData)
        {
            int ret, len, fromlen; // return code / length of the data / sizeof(sockaddr)
            char *buffer;          // will hold the data
            char c;

            // recv length of the message
            fromlen = sizeof(sockaddr);
            ret = recvfrom(m_InSock, &c, 1, 0, addr, &fromlen);
            if(ret != 1)
            {
        #ifdef __MYDEBUG__
                std::stringstream ss;
                ss << WSAGetLastError();
                MessageBox(NULL, ss.str().c_str(), "", MB_ICONERROR | MB_OK);
        #endif
                return Inc::ERECV;
            }...
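
    A likely explanation (my note, not part of the original question): on Winsock a UDP socket has no local address until it is bound, either explicitly with bind() or implicitly by a send, and recvfrom() on a never-bound socket fails with WSAEINVAL. A minimal sketch of the usual client-side fix, binding to port 0 so the stack picks an ephemeral port:

        // Sketch only: assumes WSAStartup() has already succeeded.
        SOCKET MakeClientUdpSocket()
        {
            SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

            sockaddr_in local = {};
            local.sin_family      = AF_INET;
            local.sin_addr.s_addr = htonl(INADDR_ANY);
            local.sin_port        = 0;   // 0 = let the stack choose an ephemeral port

            if (bind(s, (sockaddr*)&local, sizeof(local)) == SOCKET_ERROR)
            {
                // handle WSAGetLastError()
            }

            // recvfrom() is now valid on s. Alternatively, a sendto() on an unbound
            // UDP socket performs an implicit bind, after which recvfrom() also works.
            return s;
        }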

    Read the article

  • get_user() running in kernel mode returns an error

    - by Fangkai Yang
    Hi, all, I have a problem with the get_user() macro. What I did is as follows: I run the following program:

        int main()
        {
            int a = 20;
            printf("address of a: %p", &a);
            sleep(200);
            return 0;
        }

    When the program runs, it outputs the address of a, say 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (at the time I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds). The address is first sent as a string, and I cast it into a pointer type:

        int *ptr = (int*)simple_strtol(buffer, NULL, 16);
        printk("address: %p", ptr); // I use this line to make sure the cast is correct.

    When running, it outputs bff91914, as expected.

        int val = 0;
        int res;
        res = get_user(val, (int*) ptr);

    However, res is always not 0, meaning that get_user returns an error. I am wondering what the problem is. Thank you!! -- Fangkai

    Read the article

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baud rate (8N1 = 10 bits on the wire per 8-bit data byte, 19200 bps serial = 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time. It starts fine, but after 10 minutes there's a noticeable (100 ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;

                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );

                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.
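
    A sketch of the queue-watching approach described above, assuming a POSIX serial file descriptor. TIOCOUTQ (bytes still waiting in the driver's output queue) is available on Linux and BSD-derived systems; whether the FTDI driver on OS X reports it accurately is an assumption worth testing:

        #include <sys/ioctl.h>
        #include <termios.h>
        #include <unistd.h>

        // Returns true if it is safe to enqueue another frame, i.e. fewer than
        // max_backlog bytes are still waiting in the kernel/driver output queue.
        bool canWriteMore(int fd, int max_backlog)
        {
            int queued = 0;
            if (ioctl(fd, TIOCOUTQ, &queued) == -1)
                return true;            // fall back to writing if the ioctl is unsupported
            return queued < max_backlog;
        }

        // Alternative: block until everything written so far has actually been transmitted.
        void waitForDrain(int fd)
        {
            tcdrain(fd);
        }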

    Read the article

  • UITableView with dynamic cell heights -- what do I need to do to fix scrolling down?

    - by Ian Terrell
    I am building a teensy tiny little Twitter client on the iPhone. Naturally, I'm displaying the tweets in a UITableView, and they are of course of varying lengths. I'm dynamically changing the height of the cell based on the text quite fine: - (CGFloat)heightForTweetCellWithString:(NSString *)text { CGFloat height = Buffer + [text sizeWithFont:Font constrainedToSize:Size lineBreakMode:LineBreakMode].height; return MAX(height, MinHeight); } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *text = // get tweet text for this indexpath return [self heightForTweetCellWithString:text]; } } I'm displaying the actual tweet cell using the algorithm in the PragProg book: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"TweetCell"; TweetCell *cell = (TweetCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [self createNewTweetCellFromNib]; } cell.tweet.text = // tweet text // set other labels, etc return cell; } When I boot up, all the tweets visible display just fine. However, when I scroll down, the tweets below are quite mussed up -- it appears that once a cell has scrolled off the screen, the cell height for the one above it gets resized to be larger than it should be, and obscures part of the cell below it. When the cell reaches the top of the view, it resets itself and renders properly. Scrolling up presents no difficulties. Here is a video that shows this in action: http://screencast.com/t/rqwD9tpdltd I've tried quite a bit already: resizing the cell's frame on creation, using different identifiers for cells with different heights (i.e. [NSString stringWithFormat:@"Identifier%d", rowHeight]), changing properties in Interface Builder... If there are additional code snippets I can post, please let me know. Thanks in advance for your help!

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers.

    An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on indefinitely. The content of each page is unique, but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages, which we actually are not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain forever.

    My question is, what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages, so such a limit could return a false positive for these kinds of sites. Any feedback is appreciated. Thanks.
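
    A sketch of a few common heuristics (my own illustration, not a complete answer): cap URL path depth, keep a coarse per-domain page budget, and fingerprint page text so that repeating content stops a branch early. The helper names and thresholds below are made up for illustration:

        #include <algorithm>
        #include <functional>
        #include <string>
        #include <unordered_set>

        struct DomainStats {
            int pagesCrawled = 0;
            std::unordered_set<size_t> contentHashes;   // text fingerprints seen so far
        };

        // Rough path depth: number of '/' characters after "scheme://host".
        int pathDepth(const std::string& url)
        {
            std::string::size_type host = url.find("://");
            std::string::size_type path = url.find('/', host == std::string::npos ? 0 : host + 3);
            if (path == std::string::npos) return 0;
            return (int)std::count(url.begin() + path, url.end(), '/');
        }

        bool shouldEnqueue(const std::string& url, const DomainStats& d,
                           int maxDepth = 16, int maxPagesPerDomain = 100000)
        {
            if (pathDepth(url) > maxDepth) return false;            // trap paths tend to grow without bound
            if (d.pagesCrawled > maxPagesPerDomain) return false;   // coarse per-domain budget
            return true;
        }

        // After fetching: if this exact text was already seen on the domain, stop
        // expanding the branch. Real crawlers use shingling/simhash so near-duplicates
        // are caught too; randomly generated trap text needs the limits above.
        bool looksLikeTrapContent(DomainStats& d, const std::string& pageText)
        {
            return !d.contentHashes.insert(std::hash<std::string>{}(pageText)).second;
        }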

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected Python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my (Windows XP 32-bit) laptop, but when I upload it to the server (64-bit Windows, IIS 7) I run into some issues. The script runs, to my knowledge, but spits back no information into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together to get it to work, and I am fairly certain that this is a very bad example of the subprocess module. Just wondering if I could get a hand :). My question is: how do I have to alter my code to work on a 64-bit Windows server?

        from Tkinter import *
        import pickle, subprocess, errno, time, sys, os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time() + t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x - time.time()) / tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END, theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"
            if __name__ == '__main__':
                if sys.platform == 'win32' or sys.platform == 'win64':
                    shell, commands, tail = ('cmd', ('cd "' + location + '"', listbox.get(ANCHOR).__str__()), '\r\n')
                else:
                    return "Please use contact admin"
                a = Popen(shell, stdin=PIPE, stdout=PIPE)
                print recv_some(a)
                for cmd in commands:
                    send_all(a, cmd + tail)
                    print recv_some(a)
                send_all(a, 'exit' + tail)
                print recv_some(a, e=0)

    The code above is mine :)

    Read the article

  • C#: sending SMS from the computer

    - by I__
    I have this code:

        private SerialPort port = new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);

        Console.WriteLine("Incoming Data:");
        port.WriteTimeout = 5000;
        port.ReadTimeout = 5000;

        // Attach a method to be called when there is data waiting in the port's buffer
        port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);

        // Begin communications
        port.Open();

        #region PhoneSMSSetup
        port.Write("AT+CMGF=1\r\n");
        Thread.Sleep(500);
        port.Write("AT+CNMI=2,2\r\n");
        Thread.Sleep(500);
        port.Write("AT+CSCA=\"+4790002100\"\r\n");
        Thread.Sleep(500);
        #endregion

        // Enter an application loop which keeps this thread alive
        Application.Run();

    I got it from here: http://www.experts-exchange.com/Programming/Languages/C_Sharp/Q_22832563.html

    I have a new, empty WinForms application:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                }
            }
        }

    Can you please tell me: where exactly would I paste the code, and how do I get it to run? I am sending AT commands to my cell phone that is attached to the computer.

    Read the article

  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards, Dave

    OK, I have come a short way, but I can't make the final hurdle and I am not sure why this isn't working:

        - (void) displayContentsOfArray1UsingBitmap:(CGContextRef)context
        {
            long bitmapData[WIDTH * HEIGHT];

            // Build bitmap
            int i, j, h;
            for (i = 0; i < WIDTH; i++) {
                for (j = 0; j < HEIGHT; j++) {
                    h = frameBuffer01[i][j];
                    bitmapData[i * j] = h;
                }
            }

            // Blit the bitmap to the context
            CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData,
                                                                         4 * WIDTH * HEIGHT, NULL);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef,
                                                kCGImageAlphaFirst, providerRef, NULL, YES,
                                                kCGRenderingIntentDefault);
            CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
            CGImageRelease(imageRef);
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(providerRef);
        }
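
    One thing that stands out in the fill loop (my observation, not part of the original post): bitmapData[i * j] maps many (i, j) pairs onto the same element (and the whole first row and column onto index 0), so most of the buffer is never written. A row-major layout would normally index as j * WIDTH + i; a sketch of the loop with that indexing:

        // Sketch: fill the bitmap with row-major indexing.
        for (int j = 0; j < HEIGHT; j++) {
            for (int i = 0; i < WIDTH; i++) {
                bitmapData[j * WIDTH + i] = frameBuffer01[i][j];
            }
        }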

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second: about a 60x speed-up.

    Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, BTW, once optimized dropped the runtime to less than a second, more than a 2x speed-up).

    Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling, so if the process is currently switched out (blocking on IO or a popen call), OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.

    Read the article

  • Lifetime and Scope of a Temporary Variable

    - by Yan Cheng CHEOK
        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();

            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed at this point.
            fun(cc);

            // But I am surprised this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // scope (...)
            // My understanding is that scope means {...}
            // Is this valid behavior guaranteed by the C++ standard? Or does it
            // depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behavior is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6; it works fine for both.
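
    For what it's worth, the rule in the C++ standard is that a temporary lives until the end of the full-expression that created it, not until the end of the enclosing {...} block. So fun(get().c_str()) is fine (the temporary string outlives the call), while storing the c_str() pointer past the semicolon leaves it dangling. A small sketch illustrating the two cases and the usual fix:

        #include <cstdio>
        #include <string>

        std::string get() { return "Hello World"; }

        int main() {
            // OK: the temporary returned by get() is destroyed at the end of this
            // full-expression, i.e. after printf has already used the pointer.
            std::printf("%s\n", get().c_str());

            // Dangling: the temporary dies at the ';', so cc points to freed storage.
            const char* cc = get().c_str();
            (void)cc;  // using cc here would be undefined behaviour

            // Usual fix: keep a named copy whose lifetime you control.
            std::string kept = get();
            std::printf("%s\n", kept.c_str());
            return 0;
        }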

    Read the article

  • C# Attribute XmlIgnore and XamlWriter class - XmlIgnore not working

    - by Horst Walter
    I have a class containing a property Brush MyBrush marked as [XmlIgnore]. Nevertheless, it is serialized in the stream, causing trouble when trying to read it back via XamlReader. I did some tests; e.g. when changing the visibility of the property (to internal), it is gone from the stream. Unfortunately, I cannot do this in my particular scenario. Has anybody had the same issue? Do you see any way to work around this? Remark: C# 4.0, as far as I can tell.

    This is a method from my unit test where I test the XAML serialization:

        // buffer to a StringBuilder
        StringBuilder sb = new StringBuilder();
        XmlWriter writer = XmlWriter.Create(sb, settings);
        XamlDesignerSerializationManager manager =
            new XamlDesignerSerializationManager(writer) { XamlWriterMode = XamlWriterMode.Expression };
        XamlWriter.Save(testObject, manager);

        xml = sb.ToString();
        Assert.IsTrue(!String.IsNullOrEmpty(xml) && !String.IsNullOrEmpty(xml),
            "Xaml Serialization failed for " + testObject.GetType() + " no xml string available");

        xml = sb.ToString();
        MemoryStream ms = xml.StringToStream();
        object root = XamlReader.Load(ms);
        Assert.IsTrue(root != null, "After reading from MemoryStream no result for Xaml Serialization");

    In one of my classes I use the property Brush. In the above code, this unit test fails because a Brush object that is not serializable is the value. When I remove the setter (as below), the unit test passes. Using the XmlWriter (basically the same test as above), it works. In the StringBuffer sb I can see that the property Brush is serialized when the setter is there and not when it is removed (most likely another check ignoring the property because it has no setter). Other properties with [XmlIgnore] are ignored as intended.

        [XmlIgnore]
        public Brush MyBrush
        {
            get { ..... }
            // removed because of problem with Serialization
            // set { ... }
        }

    Read the article

  • ArrayList<Integer>, Collections.sort and LineNumberReader: how to

    - by user1819551
    I have an issue I can't get to work; let me get to the point and explain it in the code, thanks.

    This is my class: what I want to do is insert the Integers, sort the list, and write it with a buffered writer in a column, without commas. Right now I am getting this:

        [1110018, 1110032, 1110056, 1110059, 1110063, 1110085, 1110096, 1110123, 1110125, 1110185, 1110456, 1110459]

    I want it like this:

        111xxxxx
        111xxxx
        xxxx.......

    I can't do it in a single array; it has to be an ArrayList. This is my collecting:

        list.addNumbers(numbers);
        list.display();

    This is my writer (it is buffered):

        coma.write("\n" + list.display());
        coma.flush();

    Here is my class:

        public class IdCount {
            private ArrayList<Integer> properNumber = new ArrayList<>();

            public void addNumbers(Integer numbers) {
                properNumber.add(numbers);
                Collections.sort(properNumber);
            }

            public String display() {
                // (I tried .toString(), does not work)
                return properNumber.toString();
            }
        }

    My second issue is LineNumberReader. This is my collecting and my writing:

        try {
            Reader input = new BufferedReader(new FileReader(inputFile));
            try (Scanner in = new Scanner(input)) {
                while (in.hasNext()) {
                    // (More Code)
                    asp = new LineNumberReader(input);
                    int rom = 0;
                    while (asp.readLine() != null) {
                        rom++;
                    }
                    System.out.println(rom);
                    coma.write(rom);

    This one will not write anything, and my System print gives me only "12 0" in a column.

    Read the article

  • Should I return an NSMutableString in a method that returns NSString

    - by Casey Marshall
    OK, so I have a method that takes an NSString as input, does an operation on the contents of this string, and returns the processed string. So the declaration is:

        - (NSString *) processString: (NSString *) str;

    The question: should I just return the NSMutableString instance that I used as my "work" buffer, or should I create a new NSString around the mutable one and return that? So should I do this:

        - (NSString *) processString: (NSString *) str {
            NSMutableString *work = [NSMutableString stringWithString: str];
            // process 'work'
            return work;
        }

    Or this:

        - (NSString *) processString: (NSString *) str {
            NSMutableString *work = [NSMutableString stringWithString: str];
            // process 'work'
            return [NSString stringWithString: work]; // or [work stringValue]?
        }

    The second one makes another copy of the string I'm returning, unless NSString does smart things like copy-on-modify. But the first one returns something the caller could, in theory, go and modify later. I don't care if they do that, since the string is theirs. But are there valid reasons for preferring the latter form over the former? And is either stringWithString or stringValue preferred over the other?

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, windowed availability, and refresh rate) for each adaptor, and then all pixel formats available for the given mode and adaptor, alongside certain useful caps (shader version, filter types, etc.). So, broadly, I have the following protected functions in the class:

        // Enumerate all back/front buffer combinations.
        virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all depth/stencil formats.
        virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all multi-sample formats.
        virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all device formats, i.e. dynamic, static, render target, etc.
        virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9);

        // Enumerate all capabilities.
        virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9);

    The adaptors are enumerated with EnumDisplayDevices, and the modes (resolutions and refresh rates) with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, and CheckDepthStencilMatch? I know I can use DescribePixelFormat, given a DC, but you kind of need to have created the window before you can use a DC with it, and you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
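
    One workable pattern (a sketch of the usual workaround, not the only way): create a throwaway, never-shown window purely to get a DC, enumerate formats on it with DescribePixelFormat (or, once a temporary context exists, the WGL_ARB_pixel_format extension for multisample and caps queries), then destroy it and create the real window with the format you chose. A minimal sketch, error handling omitted:

        #include <windows.h>
        #include <vector>

        // Enumerate the pixel formats a DC supports using a hidden dummy window.
        std::vector<PIXELFORMATDESCRIPTOR> EnumeratePixelFormats()
        {
            std::vector<PIXELFORMATDESCRIPTOR> formats;

            // Any registered class will do; "STATIC" avoids registering our own.
            HWND dummy = CreateWindowA("STATIC", "", WS_OVERLAPPEDWINDOW,
                                       0, 0, 1, 1, NULL, NULL,
                                       GetModuleHandle(NULL), NULL);
            HDC dc = GetDC(dummy);

            // With a NULL descriptor, DescribePixelFormat returns the maximum format index.
            int count = DescribePixelFormat(dc, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);
            for (int i = 1; i <= count; ++i)
            {
                PIXELFORMATDESCRIPTOR pfd = {};
                DescribePixelFormat(dc, i, sizeof(pfd), &pfd);
                if (pfd.dwFlags & PFD_SUPPORT_OPENGL)
                    formats.push_back(pfd);   // keep only GL-capable formats
            }

            ReleaseDC(dummy, dc);
            DestroyWindow(dummy);
            return formats;
        }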

    Read the article

  • Android's DateFormat replacement - missing the format() with FieldPosition

    - by user331244
    Hi, I need to split a date string into pieces, and I'm doing it using the public final StringBuffer format(Object object, StringBuffer buffer, FieldPosition field) method from the java.text.DateFormat class. However, the implementation of this function is really slow, hence Android has its own implementation in android.text.format.DateFormat. BUT, in my case, I want to extract the different pieces of the date string (year, minute and so on). Since I need to be locale independent, I cannot use SimpleDateFormat and custom strings. I do it as follows:

        Calendar c = ...

        // find out what field to extract
        int field = getField();

        // Create a date string
        Field calendarField = DateFormat.Field.ofCalendarField(field);
        FieldPosition fieldPosition = new FieldPosition(calendarField);
        StringBuffer label = new StringBuffer();
        label = getDateFormat().format(c.getTime(), label, fieldPosition);

        // Find the piece that we are looking for
        int beginIndex = fieldPosition.getBeginIndex();
        int endIndex = fieldPosition.getEndIndex();
        String asString = label.substring(beginIndex, endIndex);

    For some reason, the format() overload with the FieldPosition argument is not included in the Android platform. Any ideas of how to do this in another way? Is there any easy way to tokenize the pattern string? Any other ideas?

    Read the article

  • Sending Email over VPN SmtpException net_io_connectionclosed

    - by Holy Christ
    I am sending an email from a WPF application. When sending as a domain user on the network, the email sends as expected. However, when I attempt to send email over a VPN connection, I get the following exception:

        Exception: System.Net.Mail.SmtpException: Failure sending mail.
        ---> System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed.
           at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller)
           at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpClient.GetConnection()
           at System.Net.Mail.SmtpClient.Send(MailMessage message)

    I have tried using impersonation as well as setting the credentials on the SmtpClient. Neither seems to work:

        using (new ImpersonateUser("myUser", "MYDOMAIN", "myPass"))
        {
            var client = new SmtpClient("myhost.com");
            client.UseDefaultCredentials = true;
            client.Credentials = new NetworkCredential("myUser", "myPass", "MYDOMAIN");
            client.Send(mailMessage);
        }

    I've also tried using Wireshark to view the message over the wire, but I don't know enough about SMTP to know what I'm looking for. One other variable is that the machine I'm using on the VPN is Vista Business and the machine on the network is Win7. I don't think it's related, but then I wouldn't be asking if I knew the issue! :) Any ideas?

    Read the article

  • Convenient way to do "wrong way rebase" in git?

    - by Kaz
    I want to pull in newer commits from master into topic, but not in such a way that topic's changes are replayed on top of master, but rather vice versa. I want the new changes from master to be played on top of topic, and the result to be installed as the new topic head. I can get exactly the right object if I rebase master onto topic; the only problem is that the object is installed as the new head of master rather than topic. Is there some nice way to do this without manually shuffling around temporary head pointers?

    Edit: Here is how it can be achieved using a temporary branch head, but it's clumsy:

        git checkout master
        git checkout -b temp   # temp points to master
        git rebase topic       # topic is brought into temp, temp changes played on top

    Now we have the object we want, and it's pointed at by temp.

        git checkout topic
        git reset --hard temp

    Now topic has it; all that is left is to tidy up by deleting temp:

        git branch -d temp

    Another way is to do away with temp and just rebase master, then reset topic to master, and finally reset master back to what it was by pulling its old head from the reflog, or a cut-and-paste buffer.

    Read the article

  • How to get the encoding from a MAPI message with the PR_BODY_A tag (Windows Mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through the Windows Mobile MAPI. The code basically looks like this:

        ulBodyProp = PR_BODY_A;
        hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream);
        if (hr == S_OK)
        {
            // ... get body size in bytes ...
            STATSTG statstg;
            piStream->Stat(&statstg, 0);
            ULONG cbBody = statstg.cbSize.LowPart;

            // ... allocate memory for the buffer ...
            BYTE* pszBodyInBytes = NULL;
            boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]);

            // ... read body into pszBodyInBytes ...
        }

    That works, and I get a message body. The problem is that this body is multibyte-encoded and I need to return a Unicode string. I guess I have to use the ::MultiByteToWideChar() function, but how can I tell which codepage I should apply? Using CP_UTF8 is naive, because it can simply not be UTF-8. Using CP_ACP works, well, sometimes, but sometimes it does not. So, my question is: how can I retrieve the information about the message codepage? Does MAPI provide any functions for it? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks!
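
    One avenue to probe (an assumption on my part: desktop MAPI exposes the body's code page through the PR_INTERNET_CPID property, but I am not certain Windows Mobile's MAPI populates it): read that property and feed it to MultiByteToWideChar instead of guessing. Another option worth trying is opening PR_BODY_W so no conversion is needed at all. A sketch against the desktop-style MAPI interfaces:

        #include <windows.h>
        #include <mapix.h>   // desktop MAPI headers; adjust for the CEMAPI equivalents

        #ifndef PR_INTERNET_CPID
        #define PR_INTERNET_CPID PROP_TAG(PT_LONG, 0x3FDE)
        #endif

        // Sketch only: returns the message body code page if the store exposes it,
        // falling back to the system ANSI code page otherwise.
        ULONG GetBodyCodePage(IMessage* piMessage)
        {
            SizedSPropTagArray(1, tags) = { 1, { PR_INTERNET_CPID } };
            SPropValue* pVal = NULL;
            ULONG cValues = 0;

            ULONG cp = CP_ACP;
            if (SUCCEEDED(piMessage->GetProps((LPSPropTagArray)&tags, 0, &cValues, &pVal))
                && pVal && PROP_TYPE(pVal[0].ulPropTag) == PT_LONG)
            {
                cp = pVal[0].Value.l;
            }
            if (pVal) MAPIFreeBuffer(pVal);
            return cp;   // pass this to MultiByteToWideChar instead of guessing
        }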

    Read the article

  • Forcing stack addresses within 32 bits when using -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer, and I need to fit that address in a 32-bit value. Obviously, where possible, these structures will use naturally sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32-bit.

    On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32( long l ) {
            int i = l & 0xffffffff;
            assert( i == l );
            return i;
        }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( buff );

    but:

        char buff[ 100 ];
        int a = _to_32( buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!
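
    One workaround for the stack case (a sketch of an idea, not from the original post): leave the main stack where it is and run the code that hands buffer addresses to the 32-bit API on a thread whose stack you place in low memory yourself, using mmap with MAP_32BIT (Linux/x86-64 specific) plus pthread_attr_setstack; stack locals of that thread then have addresses that pass _to_32(). Copying the affected buffers into malloc'd memory is the simpler alternative when only a few spots need it.

        #define _GNU_SOURCE
        #include <assert.h>
        #include <pthread.h>
        #include <stdint.h>
        #include <sys/mman.h>

        #define LOW_STACK_SIZE (1024 * 1024)

        static void *worker(void *arg)
        {
            char buff[100];                       /* lives on the low-memory stack */
            assert(((uintptr_t)buff >> 32) == 0); /* address fits in 32 bits */
            /* ... call the API that wants 32-bit buffer addresses ... */
            return NULL;
        }

        int run_on_low_stack(void)
        {
            void *stack = mmap(NULL, LOW_STACK_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
            if (stack == MAP_FAILED)
                return -1;

            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setstack(&attr, stack, LOW_STACK_SIZE);

            pthread_t tid;
            pthread_create(&tid, &attr, worker, NULL);
            pthread_join(tid, NULL);

            pthread_attr_destroy(&attr);
            munmap(stack, LOW_STACK_SIZE);
            return 0;
        }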

    Read the article

  • HTTP Handler error when downloading files - SSL

    - by Chiefy
    OK, big problem, as this is affecting two projects on our new server. We have a file that is downloaded by users; the files are downloaded using an HTTP handler. Since moving the site to the server and setting up SSL, the downloads have stopped working and we get the error message "Unable to download DownloadDocument.ashx from site". DownloadDocument.ashx is the handler page that is set in the web.config, and the button that goes there is a hyperlink with the id of the document as a querystring. I've read the article at http://support.microsoft.com/kb/316431 and read a few other requests on this site, but nothing seems to be working. This problem only happens in IE, and it works fine when I run it on the server over http instead of https.

        public override void HandleRequest(HttpContext context)
        {
            Guid guid = new Guid(context.Request.QueryString["ID"]);
            DataTable dt = Documents.GetDocument(guid);

            if (dt != null)
            {
                context.Response.Cache.SetCacheability(HttpCacheability.Private);
                context.Response.AddHeader("content-disposition",
                    string.Format("attachment; filename={0}", dt.Rows[0]["DocumentName"].ToString()));
                context.Response.AddHeader("Content-Transfer-Encoding", "binary");
                context.Response.AddHeader("Content-Length", ((byte[])dt.Rows[0]["Document"]).Length.ToString());
                context.Response.ContentType = string.Format("application/{0}",
                    dt.Rows[0]["Extension"].ToString().Remove(0, 1));
                context.Response.Buffer = true;
                context.Response.BinaryWrite((byte[])dt.Rows[0]["Document"]);
                context.Response.Flush();
                context.Response.End();
            }
        }

    The above is my current code for the request. I've used the base handler from http://haacked.com/archive/2005/03/17/AnAbstractBoilerplateHttpHandler.aspx. Any ideas on what this might be and how we can fix it? Thanks in advance for all responses.

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on...

    Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing, and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)?

    I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8 KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
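
    One experiment worth running (my suggestion, not from the post): ask Winsock how big the socket's send buffer actually is and derive the chunk size from that, or simply enlarge the buffer, rather than hard-coding a number that happens to straddle the default 8 KB. A sketch:

        #include <winsock2.h>

        // Returns a chunk size derived from the socket's actual send buffer,
        // optionally growing the buffer first. Minimal sketch, no error handling.
        int ChooseChunkSize(SOCKET s, int desiredSndBuf = 64 * 1024)
        {
            // Try to enlarge the kernel send buffer (the stack may clamp the value).
            setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&desiredSndBuf), sizeof(desiredSndBuf));

            int sndbuf = 0;
            int len = sizeof(sndbuf);
            getsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<char*>(&sndbuf), &len);

            // Stay a little under the buffer so a single send() is unlikely to be
            // split into a full-buffer write plus a tiny remainder.
            return sndbuf > 1024 ? sndbuf - 1024 : 1024;
        }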

    Read the article

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going.

    For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach? But for a job creator to find out that its job has been done, I'm not so sure. The job creators could sleep, and the worker could interrupt them when done... or each job creator could have an atomic boolean that it checks and that the worker sets. I dunno, neither of those feels very nice. I'd like to do it with as few locks as absolutely possible (none, if possible). So to be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!

    Read the article

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, like ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES like... 4 days ago (I'm a total rookie) and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.

    2) I also created a texture2D for the brush... which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.

    3) I searched for masking techniques on the internet and found this tutorial about masking: ZeusCMD - Design and Development Tutorials : OpenGL ES Programming Tutorials - Masking. The tutorial tells me to use blending to achieve masking: first draw colored, then mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then grayscale with glBlendFunc(GL_ONE, GL_ONE)... and this gives me something close to what I want, but not exactly what I want. The result is masked, but it's somehow over-brightened.

    4) For drawing to the mask texture I used an extra frame buffer object (FBO).

    I'm not really happy with the resulting image (the over-brightened picture) nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could just draw the colored texture and then blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES and it's driving me nuts because I can't get it to work properly. Any advice or a link to a tutorial would be much appreciated.
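
    For the idea in the last paragraph, one way to touch only the alpha channel of the overlay texture (a sketch of that approach, not tested against this exact setup; overlayFbo and defaultFbo are assumed handles) is to bind the FBO that has the grayscale overlay attached and draw the brush quad with the RGB channels masked off, so only alpha is written:

        // Sketch: write the brush only into the alpha channel of the overlay texture.
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, overlayFbo);   // FBO with the overlay texture attached

        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);      // leave RGB alone, write alpha only
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);             // brush alpha accumulates where it hits

        // ... draw the brush quad at the touch position here ...

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);         // restore the color mask
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFbo);    // back to the main render target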

    Read the article

  • Boost::Asio - removing the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio raises an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the messenger server get an extra "null" character (ASCII code 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, and it therefore shuts down the connection when I try to read the CVR response.

    This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium (a chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or whether I've misunderstood something and used Asio in a wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";

            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here is the main documentation on the MSN protocol I'm using. Hope I've been clear enough.
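
    The likely culprit (my reading of the snippet, worth verifying with the sniffer): sendBuf is a char array whose size includes the terminating '\0' of the string literal, and boost::asio::buffer(sendBuf) sends the whole array, null byte included. Passing the explicit length, or using std::string, keeps the terminator off the wire. A sketch of the same function with that change:

        #include <boost/asio.hpp>
        #include <cstring>
        #include <iostream>
        #include <string>

        // Sketch: the terminator is excluded because std::string::size() never counts it.
        int sendVERMessage(boost::asio::ip::tcp::socket& socket)
        {
            boost::system::error_code ignored_error;

            // With the original char array you would instead write
            //   boost::asio::buffer(sendBuf, std::strlen(sendBuf))
            const std::string msg = "VER 1 MSNP15 CVR0\r\n";

            boost::asio::write(socket, boost::asio::buffer(msg),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                std::cout << "Failed to send to host!" << std::endl;
                return 1;
            }
            std::cout << "VER message sent!" << std::endl;
            return 0;
        }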

    Read the article

  • How can I create a .NET COM interop assembly for Classic ASP that does not sequentially block other requests?

    - by Alex Waddell
    Setup: create a simple COM add-in through .NET/C# that does nothing but sleep on the current thread for 5 seconds.

        namespace ComTest
        {
            [ComVisible(true)]
            [ProgId("ComTester.Tester")]
            [Guid("D4D0BF9C-C169-4e5f-B28B-AFA194B29340")]
            [ClassInterface(ClassInterfaceType.AutoDual)]
            public class Tester
            {
                [STAThread()]
                public string Test()
                {
                    System.Threading.Thread.Sleep(5000);
                    return DateTime.Now.ToString();
                }
            }
        }

    From an ASP page, call the test component:

        <%@ Language=VBScript %>
        <%option explicit%>
        <%response.Buffer=false%>
        <%
        dim test
        set test = CreateObject("ComTester.Tester")
        %>
        <HTML>
        <HEAD></HEAD>
        <BODY>
        <%
        Response.Write(test.Test())
        set test = nothing
        %>
        </BODY>
        </HTML>

    When run on a Windows 2003 server, the test.asp page blocks ALL OTHER threads in the site while the COM component sleeps. How can I create a COM component for ASP that does not block all ASP worker threads?

    Read the article
