Search Results

Search found 7985 results on 320 pages for 'multi byte'.

Page 265/320

  • Where to put external archives to configure running in Eclipse?

    - by Buggieboy
    As a Java/Eclipse noob coming from a Visual Studio .NET background, I'm trying to understand how to set up my run/debug environment in the Eclipse IDE so I can start stepping through code. I have built my application with separate src and bin hierarchies for each library project, and each project has its own jar. For example, in a Windows environment, I have something like: c:\myapp\myapp_main\src\com\mycorp\myapp\main ...and a parallel "bin" tree like this: c:\myapp\myapp_main\bin\com\mycorp\myapp\main Other supporting projects are, similarly: c:\myapp\myapp_util\src\com\mycorp\myapp\util (and a parallel "bin" tree.) ... etc. So, I end up with, e.g., myapp_util.jar in the ...\myapp_util\bin... path and then add that as an external archive to my myapp_main project. I also use utilities like gluegen-rt.jar, which I add as external dependencies to the projects requiring them. I have been able to run outside of the Eclipse environment, by copying all my project jars, the gluegen-rt DLL, etc., into a "lib" subfolder of some directory and executing something like: java -Djava.library.path=lib -DfullScreen=false -cp lib/gluegen-rt.jar;lib/myapp_main.jar;lib/myapp_util.jar; com.mycorp.myapp.Main When I first pressed F11 to debug, however, I got a message about something like /com/sun/../gluegen... not being found by the class loader. So, to debug, or even just run, in Eclipse, I tried setting up my VM arguments in the Galileo Debug (Run/Debug) Configurations to be the command line above, beginning at "-Djava.library.path...". I've put a lib subdirectory - just like the above, with all jars and the native gluegen DLL - in various places, such as beneath the folder that my main jar is built in and as a subfolder of my Eclipse starting workspace folder, but now Eclipse can't find the main class: java.lang.NoClassDefFoundError: com.mycorp.myapp.Main, although the Classpath says that it is using the "default classpath", whatever that is. Bottom line: how do I assemble the constituent files of a multi-project application so that I can run or debug in Eclipse?

    Read the article

  • Error building Visual Studio 2010 Silverlight 4 projects on Windows 7 with XP Mode

    - by Kevin Dente
    I installed Visual Studio 2010 Beta 2 in an XP Mode VM on Windows 7. Then I created a trivial Silverlight 4 (beta) project and tried to build it. I get the following error: Error 1 The "ValidateXaml" task failed unexpectedly. System.IO.FileLoadException: Could not load file or assembly 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) File name: 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' --- System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information. at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark) at System.Reflection.Assembly.LoadFrom(String assemblyFile) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) I believe this is related to the fact that XP Mode redirects the My Documents folder to the host, turning it into a network share location, and some sort of CAS / security policy is being triggered. Anyone know how to fix it?
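
    The error text itself points at the loadFromRemoteSources switch. A minimal config sketch, assuming the assembly load happens inside the Visual Studio / MSBuild process and that moving the project off the \\tsclient share is not an option, is to opt back in to loading assemblies from network locations in that process's .exe.config (for example devenv.exe.config):

        <configuration>
          <runtime>
            <!-- Allow .NET 4 to load assemblies from network shares; see the link
                 quoted in the error message. Apply this to the process that runs
                 the ValidateXaml task (devenv.exe.config or the MSBuild host). -->
            <loadFromRemoteSources enabled="true"/>
          </runtime>
        </configuration>

    Alternatively, keeping the solution on the VM's own disk, so that obj\Debug is no longer a network path, avoids the problem without any config change.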

    Read the article

  • Android, FragmentActivity and prevent Swipe

    - by FIG-GHD742
    I use android.support.v4.app.FragmentActivity to create an app with multiple fragments/panels that the user can reach by dragging/swiping between the different parts of the app. One of my fragments contains a zoomable view, and my problem is that while the user is on the zoomable view I want to prevent them from dragging/swiping to another fragment. I tried to hack into android.support.v4.view.ViewPager to get at the touch events, but it did not work. I tried all of the following cases without success (all code is part of a subclass of android.support.v4.view.ViewPager): Case 1: // Not working @Override protected void onPageScrolled(int position, float offset, int offsetPixels) { if (isPreventDrag()) { super.onPageScrolled(position, 1, 0); } else { super.onPageScrolled(position, offset, offsetPixels); } } Case 2: // Works, but stops every event, including the ones meant for the target image view. @Override public boolean onInterceptTouchEvent(MotionEvent ev) { switch (ev.getAction()) { case MotionEvent.ACTION_DOWN: lastX = ev.getX(); // float lockScroll = false; return super.onInterceptTouchEvent(ev); case MotionEvent.ACTION_MOVE: this.lockScroll = this.isPreventDrag(); break; } if (lockScroll) { ev.setLocation(lastX, ev.getY()); return super.onInterceptTouchEvent(ev); } else { return super.onInterceptTouchEvent(ev); } } Case 3: // Works well, but due to some unknown error I can still drag the screen a few pixels before the event is stopped. @Override public boolean onTouchEvent(MotionEvent ev) { if (this.isPreventDrag()) { return true; } else { return super.onTouchEvent(ev); } } I want an easy way to enable or disable whether the user is allowed to switch to another Fragment. Here is code that now works for me; I don't know what I was doing wrong before: // This works for me. @Override public boolean onInterceptTouchEvent(MotionEvent ev) { if (this.isPreventDrag()) { return false; } else { return super.onInterceptTouchEvent(ev); } }

    Read the article

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level: DECLARE @Profitability AS TABLE ( Cust INT NOT NULL ,Category VARCHAR(10) NOT NULL ,Income DECIMAL(10, 2) NOT NULL ,Expense DECIMAL(10, 2) NOT NULL ) ; INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ; INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ; INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ; INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ; INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ; INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ; INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ; INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ; SELECT Cust ,Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability GROUP BY Cust SELECT Category ,Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability GROUP BY Category SELECT Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar or table valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience the scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?

    Read the article

  • Open Source CMS (.Net vs Java)

    - by CmsAndDotNetKindaGuy
    I must say up-front that this is not a strictly programming-related question and a very opinionated one. I am the lead developer of the dominant .Net CMS in my country and I don't like my own product :). Managerial decisions over what is important or not, and large chunks of legacy code written before I joined, give me a headache every day I go to work. Anyway, having a vast amount of experience in the web industry and a very good grasp of C# and programming practices, I have been designing my own CMS for the past few months. Now the problem is that I'm an open source kinda guy, so I have a dilemma. Should I use C# and .Net, which has crippled multi-platform support, or should I drop .Net entirely and start learning Java, where I can create a truly open-source and cross-platform CMS? Please don't answer that I should contribute to an existing open source CMS. I ruled that out after spending a great deal of time searching for something similar in structure to what I have in mind. I am not a native speaker, so feel free to correct my syntax or rephrase my question.

    Read the article

  • [C#] Improving method to read signed 8-bit integers from hexadecimal.

    - by JYelton
    Scenario: I have a string of hexadecimal characters which encode 8-bit signed integers. Every two characters represent a byte that uses the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions and I am sure there's a more efficient method that I am missing. Current Code: string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127 int v; for (int x = 0; x < strData.Length/2; x++) { v = HexToInt(strData.Substring(x * 2, 2)); Console.WriteLine(v); // do stuff with v } private int HexToInt(string _hexData) { string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0'); int i = Convert.ToInt32(strBinary.Substring(1, 7), 2); i = (strBinary.Substring(0, 1) == "0" ? i : -i); return i; } Question: Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
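
    A minimal sketch of a more direct conversion, using bit masks on the parsed byte instead of an intermediate binary string (sign-magnitude: bit 7 is the sign, bits 0-6 the magnitude):

        using System;

        class HexSignMagnitude
        {
            // One sign-magnitude byte (two hex chars) to a signed int, no binary strings.
            static int HexToInt(string hexByte)
            {
                int b = Convert.ToInt32(hexByte, 16);  // 0..255
                int magnitude = b & 0x7F;              // low 7 bits
                return (b & 0x80) != 0 ? -magnitude : magnitude;  // MSB is the sign
            }

            static void Main()
            {
                string strData = "FFC000407F";  // encodes -127, -64, 0, 64, 127
                for (int x = 0; x < strData.Length / 2; x++)
                    Console.WriteLine(HexToInt(strData.Substring(x * 2, 2)));
            }
        }

    Note that simply casting to sbyte would not help here, because sbyte assumes two's complement rather than sign-magnitude encoding.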

    Read the article

  • HP QTP 10: Web-app testing - SomeObj.FireEvent("OnClick") works, SomeObj.Object.FireEvent("OnClick") doesn't

    - by Vitaliy
    Hi all! I have a rich web app built with ExtJS. It has a multi-select list control (created with JS+CSS). I want to click on an item in that list using HP QuickTest Pro 10 with Internet Explorer 6. I added that item to the Object Repository and found that the following code works - it selects an item: Browser("blah").Page("blah").WebElement("SomeElem").Click The next code also works: Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onMouseDown") Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onMouseUp") Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onClick") But I want to select several items using Shift+click, and I don't know how to do that :( So I have a few questions: How can I perform a mouse click on several web elements with the Shift key pressed? I tried to do that using CreateEventObject + shiftKey set to true, but the method (performing FireEvent on the DOM object, not on the object from the Object Repository) doesn't work: Browser("blah").Page("blah").WebElement("SomeElem").Object.FireEvent("onClick") What is the difference between WebElement("Element").FireEvent("OnClick") and WebElement("Element").Object.FireEvent("OnClick")? Please help, because I have fought with this problem a lot but had no result. Thanks!

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel CPU, I can see in the Windows Task Manager that all four cores are actually more or less active. One core is more active than the other three, but there is also activity on those. There is no other program running (besides the OS kernel, of course) that would account for that activity. And when I close my program, the activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even though it's not a multithreaded program? Do you have any links documenting this technique? Some info about the program: it's a console app written in Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balances single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT Do you know any tools for doing a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?

    Read the article

  • How to Display a Bmp in a RTF control in VB.net

    - by Gerolkae
    I started with this C# question. I'm trying to display a bmp image inside an RTF box for a bot program I'm making. This function is supposed to convert a bitmap to RTF code which is inserted into another RTF-formatted string with additional text, kind of like smilies being used in a chat program. For some reason the output of this function gets rejected by the RTF box and vanishes completely. I'm not sure if it's the way I'm converting the bmp to a binary string or if it's tied in with the header tags 'returns the RTF string representation of our picture Public Shared Function PictureToRTF(ByVal Bmp As Bitmap) As String Dim stream As New MemoryStream() Bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp) Dim bytes As Byte() = stream.ToArray() Dim str As String = BitConverter.ToString(bytes, 0).Replace("-", String.Empty) 'header to string we want to insert Using g As Graphics = Main.CreateGraphics() xDpi = g.DpiX yDpi = g.DpiY End Using Dim _rtf As New StringBuilder() ' Calculate the current width of the image in (0.01)mm Dim picw As Integer = CInt(Math.Round((Bmp.Width / xDpi) * HMM_PER_INCH)) ' Calculate the current height of the image in (0.01)mm Dim pich As Integer = CInt(Math.Round((Bmp.Height / yDpi) * HMM_PER_INCH)) ' Calculate the target width of the image in twips Dim picwgoal As Integer = CInt(Math.Round((Bmp.Width / xDpi) * TWIPS_PER_INCH)) ' Calculate the target height of the image in twips Dim pichgoal As Integer = CInt(Math.Round((Bmp.Height / yDpi) * TWIPS_PER_INCH)) ' Append values to RTF string _rtf.Append("{\pict\wbitmap0") _rtf.Append("\picw") _rtf.Append(Bmp.Width.ToString) ' _rtf.Append(picw.ToString) _rtf.Append("\pich") _rtf.Append(Bmp.Height.ToString) ' _rtf.Append(pich.ToString) _rtf.Append("\wbmbitspixel24\wbmplanes1") _rtf.Append("\wbmwidthbytes40") _rtf.Append("\picwgoal") _rtf.Append(picwgoal.ToString) _rtf.Append("\pichgoal") _rtf.Append(pichgoal.ToString) _rtf.Append("\bin ") _rtf.Append(str.ToLower & "}") Return _rtf.ToString End Function

    Read the article

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vbulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all and is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message "Error interpreting JPEG image file (JPEG datastream contains no image)" What could be going on here? Some background info: sqldump is around 8 years old vbulletin 2.x was the software that stored the info most likely php 4 was used most likely mysql 4.0, possibly even 3.x the column datatype these attachments are stored in is mediumtext My Python 3.1 script: #!/usr/bin/env python3.1 import re trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""") trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""") extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""") with open('attachments.sql', 'rb') as fh: for line in fh: data = trim_l.findall(line)[0] data = trim_r.findall(data)[0] data = extractor.findall(data) if data: name, data = data[0] try: filename = 'files/%s' % str(name, 'UTF-8') ah = open(filename, 'wb') ah.write(data) except UnicodeDecodeError: continue finally: ah.close() fh.close() update The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers" so I'm trying to find a more complete list. Any guidance here would also be appreciated.

    Read the article

  • My OpenCL kernel is slower on faster hardware.. But why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class I came across something really weird that I wanted to discuss with you. We were asked to create any program that would show significant improvement from being programmed for a multi-core platform. I decided to try and code something on the GPU to try out OpenCL. I chose the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816X2112], I run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it's a speedup of 12X!! But now I go into my energy saver panel to turn on the "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what, my timings using the kick-ass graphics card are 3.2 seconds on average... My 9600M GT seems to be more than two times slower than the 9400M. For those of you that are OpenCL inclined, I copy all data to remote buffers before starting, so the actual computation doesn't require round trips to main RAM. Also, I let OpenCL determine the optimal local work-size, as I've read they've done a pretty good job at figuring that parameter out. Does anyone have a clue? edit: full source code with makefiles here http://www.mathieusavard.info/convolution.zip cd gimage make cd ../clconvolute make put a large input.gif in clconvolute and run it to see results

    Read the article

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure stackoverflow is the place for such a general question, but let's give it a try. Faced with the need to store application data somewhere, I've always used MySQL or sqlite, just because it's always done like that. Since it seems like the whole world is using these databases (most software products, frameworks, etc.), it is rather hard for a beginning developer like me to ask: why? Ok, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm ok with that - to put it simply, our database rows will sometimes need to have references to other tables' rows. But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand that this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside a client application and passed to the database. And it surely would name tables and columns by id numbers, not ascii strings. In the case of changes in the table structure, those bytecode queries could be recompiled according to the new db schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

    Read the article

  • WPF Spellcheck Engine takes up too much memory.

    - by Matt H.
    Each DataTemplate in my WPF ItemsControl contains FIVE custom bindable RichTextBox controls. It is a data-driven app for authoring multiple-choice questions -- the question and four answer choices must all support: 1) Spell check 2) Rich formatting (otherwise I'd use regular textboxes). The SpellCheck object in .NET 4 has a Friend constructor that takes a single argument of owner As TextBoxBase. This means every item in the ItemsControl ends up with 5 SpellCheck objects! This is the problem -- every spell check engine consumes about 500k of memory. So after you factor in the spellcheck, bindings, additional controls in the DataTemplate, etc., a single multi-choice question consumes more than 3MB of memory. Users with 100-200 questions will quickly see the app raise its memory consumption to 500+ MB. Management is definitely not OK with this. Is there a way to minimize this problem? The best suggestion I've heard is to enable/disable spellcheck depending on whether the RichTextBox is visible in the ItemsControl's scrollviewer; I haven't gotten an answer on how to go about it: (http://stackoverflow.com/questions/2869012/possible-to-implement-an-isviewportvisible-dependencyproperty-for-an-item-in-an-i) Any good ideas?
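
    One idea along those lines, sketched below under the assumption that enabling the checker lazily is acceptable: keep SpellCheck.IsEnabled off everywhere and switch it on only for the RichTextBox that currently has keyboard focus. Whether the engine's memory is actually released when the checker is disabled again is something to verify with a profiler.

        using System.Windows.Controls;

        static class LazySpellCheck
        {
            // Hypothetical helper: call it for each RichTextBox the DataTemplate creates
            // (e.g. from its Loaded handler) so only the focused box owns a spell checker.
            public static void Attach(RichTextBox box)
            {
                box.SpellCheck.IsEnabled = false;  // no engine until the user edits this box
                box.GotKeyboardFocus += (s, e) => ((RichTextBox)s).SpellCheck.IsEnabled = true;
                box.LostKeyboardFocus += (s, e) => ((RichTextBox)s).SpellCheck.IsEnabled = false;
            }
        }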

    Read the article

  • ASP.net download page

    - by Russel
    Hi I have a Reports.aspx ASP.NET page that allows users to download Excel report files by clicking on several hyperlinks. When a report hyperlink is clicked, I open a new window using the javascript window.open method and navigate off to the download.aspx page. The code-behind for the download page creates an Excel file on the fly using OpenXML (in memory) and sends it back to the browser. Here is some code from the download.aspx page: byte[] outputFileBytes = CreateExcelReport().ToArray(); Response.Clear(); Response.BufferOutput = true; Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"; Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", "tempReport.xlsx")); Response.BinaryWrite(outputFileBytes); Response.Flush(); Response.Close(); Response.End(); My problem: Some of these reports take some time to generate. I would like to display a loading.gif file on my Reports.aspx page while the download.aspx page is requested. Once the page request is completed, the loading.gif file should be made invisible. Is there a way to achieve this? Perhaps some kind of event. I have mootools at my disposal. Thanks PS. I know that generating reports like this is not ideal, but that's a different story altogether...
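
    One common workaround (a sketch, not a built-in ASP.NET event) is a download token: Reports.aspx appends a unique value to the window.open URL, download.aspx echoes it back as a cookie just before streaming the file, and the MooTools script polls document.cookie on a timer, hiding the loading gif once the value appears. The server side might look like the fragment below; the "downloadToken" name and the client-side polling script are illustrative assumptions, not existing APIs.

        // In download.aspx.cs, just before streaming the report.
        string token = Request.QueryString["downloadToken"];        // hypothetical parameter
        byte[] outputFileBytes = CreateExcelReport().ToArray();     // existing report builder

        Response.Clear();
        Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AddHeader("Content-Disposition", "attachment; filename=tempReport.xlsx");
        if (!string.IsNullOrEmpty(token))
            Response.Cookies.Add(new System.Web.HttpCookie("downloadToken", token) { Path = "/" });
        Response.BinaryWrite(outputFileBytes);
        Response.Flush();
        Response.End();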

    Read the article

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0% and then a pause and then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate per DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
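
    Short of changing providers, one way to get intermediate progress is to stop pushing everything in a single Update call and instead send the changed rows in slices, reporting after each slice. A rough sketch, assuming access to the underlying adapter (typed TableAdapters expose a matching Update(DataRow[]) overload); progress is measured in rows rather than bytes, and each slice costs an extra round trip:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class ChunkedUpdate
        {
            // Push pending changes in slices so the UI can report progress between calls.
            public static void Update(SqlDataAdapter adapter, DataTable table,
                                      int batchSize, Action<int, int> reportProgress)
            {
                DataRow[] changed = table.Select("", "",
                    DataViewRowState.Added | DataViewRowState.ModifiedCurrent | DataViewRowState.Deleted);

                for (int i = 0; i < changed.Length; i += batchSize)
                {
                    int count = Math.Min(batchSize, changed.Length - i);
                    DataRow[] slice = new DataRow[count];
                    Array.Copy(changed, i, slice, 0, count);

                    adapter.Update(slice);                       // round trip for this slice only
                    reportProgress(i + count, changed.Length);   // e.g. rows done vs. rows total
                }
            }
        }

    If all-or-nothing semantics matter, the slices would also need to run inside one transaction.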

    Read the article

  • Q on Python serialization/deserialization

    - by neil
    What chances do I have to instantiate, keep, and serialize/deserialize to/from binary data Python classes reflecting this pattern (adapted from RFC 2246 [TLS]): enum { apple, orange } VariantTag; struct { uint16 number; opaque string<0..10>; /* variable length */ } V1; struct { uint32 number; opaque string[10]; /* fixed length */ } V2; struct { select (VariantTag) { /* value of selector is implicit */ case apple: V1; /* VariantBody, tag = apple */ case orange: V2; /* VariantBody, tag = orange */ } variant_body; /* optional label on variant */ } VariantRecord; Basically I would have to define a (variant) class VariantRecord, which varies depending on the value of VariantTag. That's not that difficult. The challenge is to find the most generic way to build a class which serializes/deserializes to and from a byte stream... Pickle, Google protocol buffers and marshal are all not an option. I have had little success with an explicit "def serialize" in my class, and I'm not very happy with it, because it's not generic enough. I hope I have expressed the problem clearly. My current solution, in case VariantTag = apple, would look like this, but I don't like it too much: import binascii import struct class VariantRecord(object): def __init__(self, number, opaque): self.number = number self.opaque = opaque def serialize(self): out = struct.pack('>HB%ds' % len(self.opaque), self.number, len(self.opaque), self.opaque) return out v = VariantRecord(10, 'Hello') print binascii.hexlify(v.serialize()) >> 000a0548656c6c6f Regards

    Read the article

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException thrown, but I need to be able to read them as a binary value or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected. For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F). I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated. byte[] c0; int i0; string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;"; using (OleDbConnection c = new OleDbConnection(con)) { c.Open(); OleDbCommand cmd = c.CreateCommand(); cmd.CommandText = "SELECT * FROM astm2007"; OleDbDataReader dr = cmd.ExecuteReader(); while (dr.Read()) { c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString()); i0 = BitConverter.ToInt16(c0, 0); } dr.Dispose(); }
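
    A minimal sketch of one workaround, assuming the Jet driver decodes the character fields with a single-byte code page (Windows-1252 here; the right page depends on the DBF's language ID): re-encode the returned string with that same code page, which maps each character back to its original byte instead of the '?' (0x3F) that ASCII substitutes for anything above 0x7F.

        using System;
        using System.Data.OleDb;
        using System.Text;

        class DbfBinaryRead
        {
            static void Main()
            {
                // Assumption: the provider decoded the field with code page 1252, so
                // re-encoding with 1252 recovers the raw bytes (ASCII turned 0xD5 into
                // 0x3F, which is why 0x43D5 came back as 0x433F).
                Encoding enc = Encoding.GetEncoding(1252);
                string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;" +
                             @"Extended Properties=dBASE III;User ID=Admin;Password=;";
                using (OleDbConnection c = new OleDbConnection(con))
                {
                    c.Open();
                    OleDbCommand cmd = c.CreateCommand();
                    cmd.CommandText = "SELECT * FROM astm2007";
                    using (OleDbDataReader dr = cmd.ExecuteReader())
                    {
                        while (dr.Read())
                        {
                            byte[] c0 = enc.GetBytes(dr.GetValue(0).ToString());
                            short i0 = BitConverter.ToInt16(c0, 0);
                            Console.WriteLine(i0);
                        }
                    }
                }
            }
        }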

    Read the article

  • C# Regex stops after first line matched

    - by JD Guzman
    Ok so I have a regex and I need it to find matches in a multiline string. This is the string I am using: Device Identifier: disk0 Device Node: /dev/disk0 Part of Whole: disk0 Device / Media Name: OCZ-VERTEX2 Media Volume Name: Not applicable (no file system) Mounted: Not applicable (no file system) File System: None Content (IOContent): GUID_partition_scheme OS Can Be Installed: No Media Type: Generic Protocol: SATA SMART Status: Verified Total Size: 240.1 GB (240057409536 Bytes) (exactly 468862128 512-Byte-Blocks) Volume Free Space: Not applicable (no file system) Device Block Size: 512 Bytes Read-Only Media: No Read-Only Volume: Not applicable (no file system) Ejectable: No Whole: Yes Internal: Yes Solid State: Yes OS 9 Drivers: No Low Level Format: Not supported Basically I need to separate each line into two groups, with the colon as the separator. The regex I am using is: @"([A-Za-z0-9\(\) \-\/]+):([A-Za-z0-9\(\) \-\/]+).*" It does work, but it only picks up the first line, separates it into the two groups like I want, and then stops at that point. I have tried the Multiline option but it doesn't make any difference. I must admit I am new to the regex world. Any help is appreciated.
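
    The usual culprits with this symptom are calling Regex.Match (which returns only the first match) instead of Regex.Matches, or a value character class that excludes characters such as '.' that appear later in the input. A minimal sketch using Multiline anchors and one match per line:

        using System;
        using System.Text.RegularExpressions;

        class ColonSplitter
        {
            static void Main()
            {
                string text = "Device Identifier: disk0\r\n" +
                              "Device Node: /dev/disk0\r\n" +
                              "SMART Status: Verified";

                // ^/$ match at line boundaries because of Multiline; Matches (not Match)
                // returns every line, and [^:\r\n] keeps the key from crossing lines.
                var pattern = new Regex(@"^([^:\r\n]+):\s*(.+)$", RegexOptions.Multiline);
                foreach (Match m in pattern.Matches(text))
                    Console.WriteLine("{0} => {1}", m.Groups[1].Value.Trim(), m.Groups[2].Value.Trim());
            }
        }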

    Read the article

  • Java UTF-8 to ASCII conversion with supplements

    - by bozo
    Hi, we are accepting all sorts of national characters in UTF-8 strings on the input, and we need to convert them to ASCII strings on the output for some legacy use. (We don't accept Chinese and Japanese chars, only European languages.) We have a small utility to get rid of all the diacritics: public static final String toBaseCharacters(final String sText) { if (sText == null || sText.length() == 0) return sText; final char[] chars = sText.toCharArray(); final int iSize = chars.length; final StringBuilder sb = new StringBuilder(iSize); for (int i = 0; i < iSize; i++) { String sLetter = new String(new char[] { chars[i] }); sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFC); try { byte[] bLetter = sLetter.getBytes("UTF-8"); sb.append((char) bLetter[0]); } catch (UnsupportedEncodingException e) { } } return sb.toString(); } The question is how to replace the German sharp s (ß), Ð, and other characters that get through the above normalization method with their supplements (in case of ß the supplement would probably be "ss", and in case of Ð the supplement would be either "D" or "Dj"). Is there some simple way to do it, without a million .replaceAll() calls? So for example: Ðonardan = Djonardan, Blaß = Blass and so on. We can replace all "problematic" chars with empty space, but we would like to avoid this to keep the output as similar to the input as possible. Thank you for your answers, Bozo

    Read the article

  • Porting an IBXpress Interbase 6 app to the current Firebird platform, on Delphi 7?

    - by robsoft
    Just wondering if there are any gotchas to be wary of here. We have a legacy D7 app that we developed several years ago for a client, which uses IBXpress to talk to the open source Interbase 6 build. We're having a number of issues with that platform these days (very slow to connect/start-up on new hardware being the chief one) and the client has okayed spending some time/money moving the database over to Firebird. We really DON'T want to embark upon moving it to D2010 (or D2007 which would be my preference right now) as we figure that we might have to move the database layer from IBXpress to something else to best suit Firebird anyway. And at the end of the day, the client is only looking to lessen the database pain, not overhaul/upgrade/rewrite the app. Given the ancestry of Firebird, is it a fairly painless, well-understood path from IBXpress Interbase 6 to (whatever) with Firebird? We have quite a number of sprocs, triggers (and even datatypes) etc in the existing IB database already (and the client has a number of paying customers all using this platform) so we felt that going to Firebird was more likely to be a smoother move than moving to SQL Express (or another flavour of DB entirely). Note that we're not looking for 'embedded' DB advocacy - in many of our client's customers' installations, the software is used in a multi-user client-server way so keeping that kind of approach is important.

    Read the article

  • Using RecordStore in Java J2ME

    - by me123
    Hi, I am currently doing some J2ME development. I am having a problem in that a user can add elements to and remove elements from the record store, and if a record gets deleted, then that record is left empty and the others don't move up one. I'm trying to come up with a loop that will check if a record has anything in it (in case it has been deleted) and if it does then I want to add the contents of that record to a list. My code is similar to the following: for (int i = 1; i <= rs.getNumRecords(); i++) { // Re-allocate if necessary if (rs.getRecordSize(i) > recData.length) recData = new byte[rs.getRecordSize(i)]; len = rs.getRecord(i, recData, 0); st = new String(recData, 0, len); System.out.println("Record #" + i + ": " + new String(recData, 0, len)); System.out.println("------------------------------"); if(st != null) { list.insert(i-1, st, null); } } When it gets to rs.getRecordSize(i), I always get a "javax.microedition.rms.InvalidRecordIDException: error finding record". I know this is due to the record being empty, but I can't think of a way to get around this problem. Any help would be much appreciated. Thanks in advance.

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch on Wifi I get huge latency (and an enormous variance, too): 64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms 64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms 64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms 64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms 64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms 64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms 64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms 64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms 64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms Even if this is double what the iPhone-to-Host or Host-to-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times: traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets 1 192.168.1.3 (192.168.1.3) 4.662 ms 3.182 ms 3.034 ms Can anyone decipher this difference between ping and traceroute for me, and what it might mean for an application that needs to talk to (and from) a host?

    Read the article

  • Combining SQL Rows

    - by lumberjack4
    I've got a SQL Compact database that contains a table of IP packet headers. The table looks like this:

    Table: PacketHeaders
    ID  SrcAddress   SrcPort  DestAddress  DestPort  Bytes
    1   10.0.25.1    255      10.0.25.50   500       64
    2   10.0.25.50   500      10.0.25.1    255       80
    3   10.0.25.50   500      10.0.25.1    255       16
    4   75.48.0.25   387      74.26.9.40   198       72
    5   74.26.9.40   198      75.48.0.25   387       64
    6   10.0.25.1    255      10.0.25.50   500       48

    I need to perform a query to show 'conversations' going on across a local network. Packets going from A to B are part of the same conversation as packets going from B to A, and I need a query to show these ongoing conversations. Basically what I need is something that looks like this:

    Returned Query:
    SrcAddress   SrcPort  DestAddress  DestPort  TotalBytes  BytesA->B  BytesB->A
    10.0.25.1    255      10.0.25.50   500       208         112        96
    75.48.0.25   387      74.26.9.40   198       136         72         64

    As you can see, I need the query (or series of queries) to recognize that A-B is the same as B-A and break up the byte counts accordingly. I'm not a SQL guru by any means, but any help on this would be greatly appreciated.

    Read the article
