Search Results

Search found 7985 results on 320 pages for 'multi byte'.


  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as the (Ansi-only) ShortString type does. I mean, if I declare S: ShortString and let S := 'My String', then, at @S, I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (or a single byte would be enough, actually) containing the length of the string in bytes (or in characters, which is half the number of bytes), followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such).

    Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right?

    Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 4 - it's merely an address. The approach I eventually settled on was to use a static array of bytes. I will post my implementation as a solution later today.
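
    A rough C# analogue of the approach the asker settled on - a fixed-size buffer stored inline at the variable's own address, so that its size is constant - might look like the following sketch (not Delphi; the 128-character capacity and the single length byte are assumptions carried over from the question):

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        public unsafe struct InlineString128
        {
            public byte Length;           // length in characters, like ShortString's length byte
            public fixed char Data[128];  // UTF-16 code units stored inline, at @S itself
        }

    Because the buffer lives inside the struct, such a value can sit in a packed record and be written to a file or the heap as a single fixed-size block.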

    Read the article

  • Handling cookies on WebRequest and Response

    - by manish patel
    I have created an application that has a function, Mainpost. It is created to post data to HTTPS sites. I want to handle cookies in this function. How can I do this?

        public string Mainpost(string website, string content)
        {
            // this is what we are sending
            string post_data = content;

            // this is where we will send it
            string uri = website;

            // create a request
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.KeepAlive = false;
            request.ProtocolVersion = HttpVersion.Version10;
            request.Method = "POST";

            // turn our request string into a byte stream
            byte[] postBytes = Encoding.ASCII.GetBytes(post_data);

            // this is important - make sure you specify type this way
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = postBytes.Length;
            Stream requestStream = request.GetRequestStream();

            // now send it
            requestStream.Write(postBytes, 0, postBytes.Length);
            requestStream.Close();

            // grab the response and print it out to the console
            // along with the status code
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            string str = (new StreamReader(response.GetResponseStream())).ReadToEnd();
            Console.WriteLine(response.StatusCode);
            return str;
        }
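
    HttpWebRequest only tracks cookies when a CookieContainer is attached to it, so one hedged sketch of how cookie handling might slot into Mainpost (the shared container field is an assumption):

        // one container shared across requests, e.g. a field on the class
        private readonly CookieContainer _cookies = new CookieContainer();

        // inside Mainpost, after creating the request:
        request.CookieContainer = _cookies;

        // after GetResponse(), any Set-Cookie headers land in _cookies and are
        // replayed automatically by later requests that use the same container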

    Read the article

  • Convert ASP.NET WebForms logic to ASP.NET MVC

    - by gmcalab
    I had this code in an old ASP.NET WebForms app to take a MemoryStream and write it out as the response, showing a PDF. I am now working with an ASP.NET MVC application and am looking to do this same thing, but how should I go about showing the MemoryStream as a PDF using MVC? Here's my ASP.NET WebForms code:

        private void ShowPDF(MemoryStream ms)
        {
            try
            {
                // get byte array of pdf in memory
                byte[] fileArray = ms.ToArray();

                // send file to the user
                Page.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                Page.Response.Buffer = true;
                Response.Clear();
                Response.ClearContent();
                Response.ClearHeaders();
                Response.Charset = string.Empty;
                Response.ContentType = "application/pdf";
                Response.AddHeader("content-length", fileArray.Length.ToString());
                Response.AddHeader("Content-Disposition", "attachment;filename=TID.pdf;");
                Response.BinaryWrite(fileArray);
                Response.Flush();
                Response.Close();
            }
            catch
            {
                // and boom goes the dynamite...
            }
        }
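
    In MVC the idiomatic route is to skip Response manipulation entirely and return a file result from the action. A minimal sketch (BuildPdf is a hypothetical stand-in for whatever produces the stream):

        public ActionResult ShowPdf()
        {
            MemoryStream ms = BuildPdf();  // hypothetical: however the PDF gets built
            // FileContentResult sets Content-Type and Content-Disposition for us
            return File(ms.ToArray(), "application/pdf", "TID.pdf");
        }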

    Read the article

  • NetworkStream.Read delay in .NET

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, except the first call to Read blocks too long. And by too long I mean that the socket has received plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has received plenty of data using the packet sniffer Wireshark. I have set the receive buffer to small amounts, and very small amounts (like just a few bytes), to no avail. I have done the same with the buffer byte array I pass to the Read method, and it still delays. Or to put it another way: I am downloading 600k. The download takes 5 seconds (at a little over 100k/second connection to the server, which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are available (256 is the receive buffer and the size of the array I read into). Then, magically, the other few hundred thousand bytes can be read in 256-byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds the socket received much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?
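
    For comparison, NetworkStream.Read is documented to return as soon as any data is buffered, not when the buffer fills, so a bare read loop outside the inherited class may help isolate where the delay creeps in. A sketch (host, port, buffer size, and the consumer are placeholder assumptions):

        using (TcpClient client = new TcpClient("example.com", 80) { NoDelay = true })
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[8192];
            int read;
            // returns with whatever the socket currently holds, up to buffer.Length
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                Consume(buffer, read);  // hypothetical consumer
            }
        }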

    Read the article

  • How can I zip files in Java and not include file paths

    - by Ignacio
    For example, I want to zip a file stored in /Users/me/Desktop/image.jpg. I made this method:

        public static Boolean generateZipFile(ArrayList<String> sourcesFilenames,
                String destinationDir, String zipFilename) {
            // Create a buffer for reading the files
            byte[] buf = new byte[1024];
            try {
                // VER SI HAY QUE CREAR EL ROOT PATH (check whether the root path must be created)
                boolean result = (new File(destinationDir)).mkdirs();
                String zipFullFilename = destinationDir + "/" + zipFilename;
                System.out.println(result);

                // Create the ZIP file
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFullFilename));

                // Compress the files
                for (String filename : sourcesFilenames) {
                    FileInputStream in = new FileInputStream(filename);

                    // Add ZIP entry to output stream.
                    out.putNextEntry(new ZipEntry(filename));

                    // Transfer bytes from the file to the ZIP file
                    int len;
                    while ((len = in.read(buf)) > 0) {
                        out.write(buf, 0, len);
                    }

                    // Complete the entry
                    out.closeEntry();
                    in.close();
                }

                // Complete the ZIP file
                out.close();
                return true;
            } catch (IOException e) {
                return false;
            }
        }

    But when I extract the archive, the unzipped files have the full path. I don't want the full path of each file in the zip; I only want the filename. How can I do this?
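
    The entry name is simply whatever string the ZipEntry is constructed with, so the usual fix in the Java above is to build the entry from the file's name rather than its full path. For comparison, the same idea sketched in C# with System.IO.Compression (paths and names here are assumptions):

        using (ZipArchive zip = ZipFile.Open(zipPath, ZipArchiveMode.Create))
        {
            foreach (string file in sourceFiles)
            {
                // entry name = file name only, so no directory structure is stored
                zip.CreateEntryFromFile(file, Path.GetFileName(file));
            }
        }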

    Read the article

  • Volatile fields in C#

    - by Danny Chen
    From the specification, §10.5.3 Volatile fields: the type of a volatile field must be one of the following: a reference-type; the type byte, sbyte, short, ushort, int, uint, char, float, bool, System.IntPtr, or System.UIntPtr; or an enum-type having an enum base type of byte, sbyte, short, ushort, int, or uint. First I want to confirm my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (for reference types, because of their address), which guarantees that read/write operations are atomic. A double/long/etc. type can't be volatile because reads/writes of it are not atomic, since it occupies more than 4 bytes in memory. Is my understanding correct? And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar; 4 bytes is OK) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because all user-defined structs (which are possibly more than 4 bytes) are disallowed from being volatile by design?
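
    A small sketch of where the compiler draws the line; the exact diagnostic code is an assumption based on current C# compilers:

        struct IntWrapper { public int Value; }

        class Flags
        {
            private volatile bool _done;   // fine: bool is on the permitted list
            private volatile int _count;   // fine: int is on the permitted list

            // private volatile IntWrapper _w;  // rejected (CS0677): no struct type
            //                                  // may be volatile, whatever its size

            private int _wrapped;          // common workaround: store the underlying
                                           // int and wrap access in a property
        }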

    Read the article

  • Is my method for avoiding dynamic_cast<> faster than dynamic_cast<> itself?

    - by ereOn
    Hi, I was answering a question a few minutes ago and it raised another one for me. In one of my projects, I do some network message parsing. The messages are in the form: [1 byte message type][2 bytes payload length][x bytes payload]. The format and content of the payload are determined by the message type. I have a class hierarchy, based on a common class Message. To instantiate my messages, I have a static parsing method which gives back a Message* depending on the message type byte. Something like:

        Message* parse(const char* frame)
        {
            // This is sample code; in real life I obviously check that the buffer
            // is not NULL, and the size, and so on.
            switch (frame[0])
            {
                case 0x01: return new FooMessage();
                case 0x02: return new BarMessage();
            }
            // Throw an exception here because the message type is unknown.
        }

    I sometimes need to access the methods of the subclasses. Since my network message handling must be fast, I decided to avoid dynamic_cast<>, and I added a method to the base Message class that gives back the message type. Depending on this return value, I use a static_cast<> to the right child type instead. I did this mainly because I was told once that dynamic_cast<> was slow. However, I don't know exactly what it really does or how slow it is; thus, my method might be just as slow (or slower), but far more complicated. What do you think of this design? Is it common? Is it really faster than using dynamic_cast<>? Any detailed explanation of what happens under the hood when one uses dynamic_cast<> is welcome!
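
    One way to sidestep the comparison entirely is to put the subclass-specific behavior behind a virtual method on the base class, so that no cast, dynamic or static, is needed at the call site. A hedged rendition of that shape in C# (chosen for consistency with the other snippets on this page; it is not the asker's C++):

        public abstract class Message
        {
            // subclass-specific behavior reached through one virtual call
            public abstract void Handle();

            public static Message Parse(byte[] frame)
            {
                switch (frame[0])
                {
                    case 0x01: return new FooMessage();
                    case 0x02: return new BarMessage();
                    default: throw new ArgumentException("unknown message type");
                }
            }
        }

        public sealed class FooMessage : Message { public override void Handle() { /* ... */ } }
        public sealed class BarMessage : Message { public override void Handle() { /* ... */ } }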

    Read the article

  • How to include external classes in a GAE deployment?

    - by kodra
    I am using the Google plug-in for Eclipse and have the following problem: the project consists of a GWT-based GUI talking to a server running on GAE and using JPA. Additionally, there is a project to migrate the legacy data to the new datastore. Since both of these projects use a common data model, I have extracted a set of interfaces and enums into a separate project and made the other two projects depend on it. The Java app project seems to work, but the GWT/GAE one only works if I manually copy the classes into the WEB-INF/classes directory. Obviously this only works when using hosted mode. Does anybody know how to configure such a multi-project setup in Eclipse? Also, I am not sure if the multi-project layout is the best solution. The set of common model objects is used in all 3 areas: the user client (the GWT project, compiling the standard client and shared folders), the server side (providing services for GWT-RPC, uploading, and different feeds), and the migration application (posting the legacy data to the upload servlet). What are the architectural options for keeping the number of duplicated classes to a minimum?

    Read the article

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo("flac.exe");
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format(flacParam, SourceFile);

        ProcessStartInfo piLame = new ProcessStartInfo("lame.exe");
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format(lameParam, QualitySetting, ExtractTag(SourceFile));

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start(piFlac);
        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler(this.ReadStdout);
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        while (count != 0)
        {
            lamep.StandardInput.BaseStream.Write(buffer, 0, count);
            count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        }

    Here I set the command-line parameters to tell lame.exe to write its output to stdout, and make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary data. But DataReceivedEventArgs.Data is of type string, and I have to convert it to byte[] before putting it in the cache; I think this is ugly, and when I tried this approach the result was incorrect. Is there any way I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and writing directly to disk would cause severe fragmentation. Thanks a lot!
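
    Since piLame already redirects standard output, the raw bytes are reachable through lamep.StandardOutput.BaseStream, the same way the code above already reads flacp. A sketch of that idea (drop the OutputDataReceived hookup and BeginOutputReadLine; the cache stream is a hypothetical sink):

        Stream lameOut = lamep.StandardOutput.BaseStream;  // raw bytes, no string decoding
        byte[] outBuf = new byte[4096];
        int n;
        while ((n = lameOut.Read(outBuf, 0, outBuf.Length)) > 0)
        {
            cache.Write(outBuf, 0, n);  // hypothetical in-memory sink
        }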

    Read the article

  • .NET IHttpHandler Streaming SQL Binary Data

    - by Yisman
    Hello everybody. I am trying to implement an IHttpHandler for streaming files. Files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the line? Most of the code seems to first read the whole field from MSSQL into memory and then use streaming only for the output writing. Wouldn't it be more efficient to actually stream from the database directly to HTTP, byte by byte (or in buffered chunks)? Here's my code so far, but I can't figure out the correct combination of the SqlDataReader mode, the stream object, and the writing system:

        Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.BufferOutput = False
            Dim FileField = safeparam(context.Request.QueryString("FileField"))
            Dim FileTable = safeparam(context.Request.QueryString("FileTable"))
            Dim KeyField = safeparam(context.Request.QueryString("KeyField"))
            Dim FileKey = safeparam(context.Request.QueryString("FileKey"))

            Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString)
                Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection)
                    command.CommandType = Data.CommandType.Text
                End Using
            End Using
        End Sub

    Please be aware that this SQL command also returns the file extension (pdf, jpg, doc...) in the second field of the query. Thank you all very much.
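
    Chunked streaming straight out of SQL Server is what CommandBehavior.SequentialAccess combined with SqlDataReader.GetBytes is designed for. A C# sketch of the reading side (C# rather than the asker's VB.NET for consistency with the other snippets here; the column and table names are assumptions, and the query is parameterized, since concatenating query-string values into SQL invites injection):

        using (SqlCommand command = new SqlCommand(
            "SELECT FileBytes FROM Files WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", fileKey);
            connection.Open();

            // SequentialAccess streams the BLOB instead of buffering the whole column
            using (SqlDataReader reader =
                   command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (reader.Read())
                {
                    byte[] buffer = new byte[8192];
                    long offset = 0, read;
                    while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                    {
                        context.Response.OutputStream.Write(buffer, 0, (int)read);
                        offset += read;
                    }
                }
            }
        }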

    Read the article

  • How can I change the selected value of a drop-down list dynamically

    - by Deepak Gupta
    I want to pick the value from a text box and then change the selected value of the drop-down list according to that value:

        <html>
        <head>
        <script>
        function change() {
            var value = document.getElementById('text').value;
            document.getElementById("Model").selectedvalue = value
        }
        </script>
        </head>
        <body>
            <asp:DropDownList ID="Model" AutoPostBack="false" runat="server" CssClass="styled">
                <asp:ListItem Value="None">None</asp:ListItem>
                <asp:ListItem Value="Enum">Enum</asp:ListItem>
                <asp:ListItem Value="Sum">Sum</asp:ListItem>
                <asp:ListItem Value="Multi">Multi</asp:ListItem>
                <asp:ListItem Value="Xaxis">Xaxis</asp:ListItem>
            </asp:DropDownList>
            <input id="text" type="text"/>
            <input type="button" onclick="change();"/>
        </body>
        </html>

    Read the article

  • SharePoint NewForm: adding attachments programmatically

    - by CodeSpeaker
    I have a list with a custom form which contains a custom file upload control. As soon as the user selects a file and clicks upload, I want this file to go directly into the attachments list within that list item. However, when adding the file to SPContext.Current.ListItem.Attachments on a new item, the attachment won't show up in the list after saving. If I instead call item.Update() on the new item after adding the attachment, I get an error in SharePoint, but when I then go back to the list, the item is there with its attachment. It seems like it's trying to create 2 new entries at once when I save (item.Update), which results in the second of those crashing. What would be the correct way to add attachments this way?

        oSPWeb.AllowUnsafeUpdates = true;

        // Get the list item
        SPListItem listItem = SPContext.Current.ListItem;

        // Get the attachment collection
        SPAttachmentCollection attachmentCollection = listItem.Attachments;

        Stream attachmentStream;
        Byte[] attachmentContent;

        // Get the file from the file upload control
        if (fileUpload.HasFile)
        {
            attachmentStream = fileUpload.PostedFile.InputStream;
            attachmentContent = new Byte[attachmentStream.Length];
            attachmentStream.Read(attachmentContent, 0, (int)attachmentStream.Length);
            attachmentStream.Close();
            attachmentStream.Dispose();

            // Add the file to the attachment collection
            attachmentCollection.Add(fileUpload.FileName, attachmentContent);
        }

        // Update the list item
        listItem.Update();

    Read the article

  • Use of Syntactic Sugar / Built-in Functionality

    - by Kyle Rozendo
    I was busy looking deeper into things like multi-threading and deadlocking, etc. The book I was reading presents both pseudo-code and C code, and I was looking at implementations of things such as mutex locks and monitors. This brought to mind the following: in C#, and in fact .NET, we have a lot of syntactic sugar for doing things. For instance (.NET 3.5):

        lock(obj) { body }

    is identical to:

        var temp = obj;
        Monitor.Enter(temp);
        try { body }
        finally { Monitor.Exit(temp); }

    There are other examples, of course, such as the using() {} construct, etc. My question is: when is it more applicable to "go it alone" and literally code things oneself than to use the syntactic sugar in the language? Should one ever use their own ways rather than those of people who are more experienced in the language you're coding in? I recall having to avoid using a Process object in a using block to help with some multi-threaded issues and infinite looping before. I still feel dirty for not having the using construct in there. Thanks, Kyle
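
    As a footnote on how quickly this particular sugar can shift underneath you: from .NET 4.0 the compiler expands lock using the Monitor.Enter(object, ref bool) overload, releasing only if the lock was actually taken. A sketch of that expansion (the shape follows the documented .NET 4.0 pattern):

        bool lockTaken = false;
        try
        {
            Monitor.Enter(obj, ref lockTaken);  // lockTaken is set even if an
                                                // asynchronous exception intervenes
            // body
        }
        finally
        {
            if (lockTaken) Monitor.Exit(obj);
        }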

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on the disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB of data + 89 MB of index on disk
    - a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working with a big project in Xcode, relatively recently ported from Metrowerks (yes, really), and there are a bunch of warnings that I want to get rid of. Every so often an IMPORTANT warning comes up, but I never look at them because there are too many garbage ones. So if I can either figure out how to get Xcode to stop giving a warning, or actually fix the problem, that would be great. Here are the warnings:

    - It claims that <map.h> is antiquated. However, when I replace it with <map>, my files don't compile. Evidently, there's something in map.h that isn't in map...
    - "this decimal constant is unsigned only in ISO C90": This is a large number being compared to an unsigned long. I have even cast it, with no effect.
    - "enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum>": This appears to be from a ?: operator. Possibly the then and else branches don't evaluate to the same type? Except that in at least one case it's (matchVp == NULL ? noErr : dupFNErr), and since those are both of type OSErr, which is Mac-defined... I'm not sure what's up. It also seems to come up when I have other pairs of Mac constants...
    - "multi-character character constant": This one is obvious. The problem is that I actually NEED multi-character constants...
    - "-fwritable-strings not compatible with literal CF/NSString": I unchecked the "Strings are Read-Only" box in both the project and target settings... and it seems to have had no effect...

    Read the article

  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in Linq to Sql is not great. I've tried numerous ways, beginning with what would be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException, which is widely discussed in blogs. I've then gone down the path of treating the geography column as varbinary(max), since geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue, which does not seem to have happened to many other people: System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'. This error is thrown from an ObjectMaterializer when iterating through a query (System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()). It seems to only occur when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins). My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error: can't cast byte[] to SqlGeography. That I would understand. This I don't. I do have some properties on the partial LINQ to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.

    Read the article

  • Comparing salt and hashed passwords during login doesn't seem to work right

    - by Pandiya Chendur
    I stored the salt and hash values of the password during user registration. But during login, I again salt and hash the password given by the user, and what happens is that a new salt and a new hash are generated:

        string password = collection["Password"];
        reg.PasswordSalt = CreateSalt(6);
        reg.PasswordHash = CreatePasswordHash(password, reg.PasswordSalt);

    These statements are in both registration and login. The salt and hash during registration were eVSJE84W and 18DE22FED8C378DB7716B0E4B6C0BA54167315A2; during login they were 4YDIeARH and 12E3C1F4F4CFE04EA973D7C65A09A78E2D80AAC7. Any suggestions?

        public static string CreateSalt(int size)
        {
            // Generate a cryptographic random number.
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            byte[] buff = new byte[size];
            rng.GetBytes(buff);

            // Return a Base64 string representation of the random number.
            return Convert.ToBase64String(buff);
        }

        public static string CreatePasswordHash(string pwd, string salt)
        {
            string saltAndPwd = String.Concat(pwd, salt);
            string hashedPwd = FormsAuthentication.HashPasswordForStoringInConfigFile(
                saltAndPwd, "sha1");
            return hashedPwd;
        }
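
    The usual shape of the fix is that login never calls CreateSalt at all: it looks up the salt (and hash) that were stored at registration and re-hashes the supplied password with that stored salt. A minimal sketch reusing CreatePasswordHash from above (the surrounding database lookup is left as an assumption):

        public static bool VerifyPassword(string suppliedPassword,
                                          string storedSalt, string storedHash)
        {
            // hash with the *stored* salt, never a freshly generated one,
            // then compare against the stored hash
            return CreatePasswordHash(suppliedPassword, storedSalt) == storedHash;
        }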

    Read the article

  • Java Process.waitFor() and IO streams

    - by lynks
    I have the following code:

        String[] cmd = { "bash", "-c", "~/path/to/script.sh" };
        Process p = Runtime.getRuntime().exec(cmd);

        PipeThread a = new PipeThread(p.getInputStream(), System.out);
        PipeThread b = new PipeThread(p.getErrorStream(), System.err);

        p.waitFor();
        a.die();
        b.die();

    The PipeThread class is quite simple, so I will include it in full:

        public class PipeThread implements Runnable {
            private BufferedInputStream in;
            private BufferedOutputStream out;
            public Thread thread;
            private boolean die = false;

            public PipeThread(InputStream i, OutputStream o) {
                in = new BufferedInputStream(i);
                out = new BufferedOutputStream(o);
                thread = new Thread(this);
                thread.start();
            }

            public void die() { die = true; }

            public void run() {
                try {
                    byte[] b = new byte[1024];
                    while (!die) {
                        int x = in.read(b, 0, 1024);
                        if (x > 0)
                            out.write(b, 0, x);
                        else
                            die();
                        out.flush();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                try {
                    in.close();
                    out.close();
                } catch (Exception e) { }
            }
        }

    My problem is this: p.waitFor() blocks endlessly, even after the subprocess has terminated. If I do not create the pair of PipeThread instances, then p.waitFor() works perfectly. What is it about the piping of IO streams that is causing p.waitFor() to continue blocking? I'm confused, as I thought the IO streams would be passive, unable to keep a process alive or to make Java think the process is still alive.

    Read the article

  • F# Inline Function Specialization

    - by Ben
    Hi, my current project involves lexing and parsing script code, and as such I'm using fslex and fsyacc. Fslex LexBuffers can come in either LexBuffer<char> or LexBuffer<byte> varieties, and I'd like to have the option to use both. In order to use both, I need a lexeme function of type ^buf -> string. Thus far, my attempts at specialization have looked like:

        let inline lexeme (lexbuf: ^buf) : ^buf -> string
            where ^buf : (member Lexeme: char array) =
            new System.String(lexbuf.Lexeme)

        let inline lexeme (lexbuf: ^buf) : ^buf -> string
            where ^buf : (member Lexeme: byte array) =
            System.Text.Encoding.UTF8.GetString(lexbuf.Lexeme)

    I'm getting a type error stating that the function body should be of type ^buf -> string, but the inferred type is just string. Clearly, I'm doing something (majorly?) wrong. Is what I'm attempting even possible in F#? If so, can someone point me to the proper path? Thanks!

    Read the article

  • ASP.NET Response Filter to Reformat the rendered output of ASPX pages?

    - by PropellerHead
    I've created a simple HttpModule and response stream to reformat the rendered output of web pages (see the code snippets below). In the HttpModule I set the Response.Filter to my PageStream:

        m_Application.Context.Response.Filter = new PageStream(m_Application.Context);

    In the PageStream I override the Write method in order to do my reformatting of the rendered output:

        public override void Write(byte[] buffer, int offset, int count)
        {
            string html = System.Text.Encoding.UTF8.GetString(buffer);

            // Do some string replace operations here...

            byte[] input = System.Text.Encoding.UTF8.GetBytes(html);
            m_DefaultStream.Write(input, 0, input.Length);
        }

    This works fine when using it on simple HTML pages (.html), but when I use this method on ASPX pages (.aspx), the Write method is called several times, splitting the reformatting into different steps and potentially destroying the string replacement operations. How do I solve this? Is there a way to make the ASPX page NOT call Write several times, e.g. by changing its buffer size, or have I chosen the wrong approach entirely by using this Response.Filter method to manipulate the rendered output?
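
    A common way around the chunked writes is to let the filter accumulate the whole response and run the replacements once when the stream is closed. A hedged sketch of that shape (the replacement itself is a placeholder; the rest is the boilerplate the Stream contract requires):

        using System;
        using System.IO;
        using System.Text;

        public class BufferingPageStream : Stream
        {
            private readonly Stream _inner;  // the original Response.Filter stream
            private readonly MemoryStream _buffer = new MemoryStream();

            public BufferingPageStream(Stream inner) { _inner = inner; }

            // accumulate every chunk; ASPX pages call Write many times
            public override void Write(byte[] buffer, int offset, int count)
            {
                _buffer.Write(buffer, offset, count);
            }

            // run the replacements exactly once, on the complete document
            public override void Close()
            {
                string html = Encoding.UTF8.GetString(_buffer.ToArray());
                html = html.Replace("old", "new");  // placeholder for the real rewriting
                byte[] output = Encoding.UTF8.GetBytes(html);
                _inner.Write(output, 0, output.Length);
                _inner.Close();
                base.Close();
            }

            // boilerplate required by the Stream contract
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { return _buffer.Length; } }
            public override long Position
            {
                get { return _buffer.Position; }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { /* everything is deferred to Close */ }
            public override int Read(byte[] buffer, int offset, int count)
            { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin)
            { throw new NotSupportedException(); }
            public override void SetLength(long value)
            { throw new NotSupportedException(); }
        }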

    Read the article

  • Windsor IHandlerSelector in RIA Services Visual Studio 2010 Beta2

    - by Savvas Sopiadis
    Hi everybody! I want to implement multi-tenancy using Windsor, and I don't know how to handle this situation. I successfully used this technique in plain ASP.NET MVC projects and thought incorporating it in a RIA Services project would be similar. So I used IHandlerSelector, registered some components, and wrote an ASP.NET MVC view to verify that it works in a plain ASP.NET MVC environment. And it did! The next step was to create a DomainService which gets an IRepository injected in the constructor. This service is hosted in the ASP.NET MVC application. And it actually... works: I can get data out of it into a Silverlight application. Sample snippet:

        public OrganizationDomainService(IRepository<Culture> cultureRepository)
        {
            this.cultureRepository = cultureRepository;
        }

    The last step was to see if it works multi-tenant-like: it does not! The weird thing is this: with a few lines of code writing debug messages to a log file, I verified that the correct handler is selected! BUT this handler does not seem to be injected into the DomainService. I ALWAYS get the first handler (that's the logic in my SelectHandler). Can anybody verify this behavior? Is injection not working in RIA Services? Or am I missing something basic? Development environment: Visual Studio 2010 Beta 2. Thanks in advance.

    Read the article

  • Apache HttpClient 4.0. Weird behavior.

    - by Mikhail T
    Hello. I'm using Apache HttpClient 4.0 for my web crawler. The behavior I find strange is this: I'm trying to get a page via the HTTP GET method and I get a 404 HTTP error in response. But if I try to get that page using a browser, it loads successfully. Details: first, I upload a multipart form to the server this way:

        HttpPost httpPost = new HttpPost("http://[host here]/in.php");
        MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        entity.addPart("method", new StringBody("post"));
        entity.addPart("key", new StringBody("223fwe0923fjf23"));
        FileBody fileBody = new FileBody(new File("photo.jpg"), "image/jpeg");
        entity.addPart("file", fileBody);
        httpPost.setEntity(entity);

        HttpResponse response = httpClient.execute(httpPost);
        HttpEntity result = response.getEntity();

        String responseString = "";
        if (result != null) {
            InputStream inputStream = result.getContent();
            byte[] buffer = new byte[1024];
            while (inputStream.read(buffer) > 0)
                responseString += new String(buffer);
            result.consumeContent();
        }

    The upload completes successfully, and I get some results back from the web server. Then I request the results:

        HttpGet httpGet = new HttpGet("http://[host here]/res.php?key=" + myKey + "&action=get&id=" + id);
        HttpResponse response = httpClient.execute(httpGet);
        HttpEntity entity = response.getEntity();

    I get a ClientProtocolException when the execute method runs. I was debugging this situation with log4j; the server answers "404 Not Found". But my browser loads that page with no problem. Can anybody help me? Thank you.

    Read the article

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it saves the image to that file. In onActivityResult I can check that the image is being saved in the intended file, which is correct. The weird thing is that the image is also saved in a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (the epoch time at which the image was taken). I've checked the Camera app source (froyo-release branch) and I'm almost sure that the code path is correct and wouldn't save the image twice, but I'm a noob and I'm not completely sure. AFAIK, the image-saving process starts with this callback (comments are mine):

        private final class JpegPictureCallback implements PictureCallback {
            ...
            public void onPictureTaken(...) {
                ...
                // This is where the image is passed back to the invoking activity.
                mImageCapture.storeImage(jpegData, camera, mLocation);
                ...

        public void storeImage(final byte[] data, android.hardware.Camera camera, Location loc) {
            if (!mIsImageCaptureIntent) {  // Am I an intent?
                // THIS SHOULD NOT BE CALLED WITHIN THE CAPTURE INTENT!!
                int degree = storeImage(data, loc);
            .......

        // And finally:
        private int storeImage(byte[] data, Location loc) {
            try {
                long dateTaken = System.currentTimeMillis();
                String title = createName(dateTaken);
                String filename = title + ".jpg";  // Eureka, timestamp filename!
                ...

    So, I'm receiving the correct data, but it's also being saved by the storeImage(data, loc) call, which should not be called... It wouldn't be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I found about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both with my Nexus One running Froyo and my Huawei U8110 running Eclair. Could anyone please enlighten me? Thanks a lot.

    Read the article

  • I am looking for an actual functional web browser control for .NET, maybe a C++ library

    - by Joshua
    I am trying to emulate a web browser in order to execute JavaScript code and then parse the DOM. The System.Windows.Forms.WebBrowser object does not give me the functionality I need. It lets me set the headers, but you cannot set the proxy or clear cookies. Well, you can, but it is not ideal and messes with IE's settings. I've been extending the WebBrowser control by P/Invoking native Windows functions so far, but it is really one hack on top of another. I can mess with the proxy and also clear cookies and such, but this control has its issues, as I mentioned. I found something called WebKit .NET (http://webkitdotnet.sourceforge.net/), but I don't see support for setting proxies or cookie manipulation. Can someone recommend a C++/.NET/whatever library to do this? Basically, tell me what I need to do to get an interface similar to this in .NET:

        // this should probably pause the current thread for the max timeout,
        // throw an exception on failure or return null, w/e, VAGUELY similar to this
        string WebBrowserEmu::FetchBrowserParsedHtml(Uri url, WebProxy p, int timeoutSeconds,
                                                     byte[] headers, byte[] postdata);
        void WebBrowserEmu::ClearCookies();

    I am not responsible for my actions.

    Read the article

  • Bluetooth in Java Mobile: Handling connections that go out of range

    - by Albus Dumbledore
    I am trying to implement a server-client connection over SPP. After initializing the server, I start a thread that first listens for clients and then receives data from them. It looks like this:

        public final void run() {
            while (alive) {
                try {
                    /*
                     * Await client connection
                     */
                    System.out.println("Awaiting client connection...");
                    client = server.acceptAndOpen();

                    /*
                     * Start receiving data
                     */
                    int read;
                    byte[] buffer = new byte[128];
                    DataInputStream receive = client.openDataInputStream();
                    try {
                        while ((read = receive.read(buffer)) > 0) {
                            System.out.println("[Received]: " + new String(buffer, 0, read));
                            if (!alive) {
                                return;
                            }
                        }
                    } finally {
                        System.out.println("Closing connection...");
                        receive.close();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    It's working fine; I am able to receive messages. What's troubling me is how the thread would eventually die when a device goes out of range. Firstly, the call to receive.read(buffer) blocks, so the thread waits until it receives any data; if the device goes out of range, it would never proceed to check whether it has been interrupted in the meantime. Secondly, it would never close the connection, i.e. the server would not accept the device once it comes back into range. Thanks! Any ideas would be highly appreciated! Merry Christmas!

    Read the article
