Search Results

Search found 22427 results on 898 pages for 'opn program'.


  • Securing a license key with an RSA key

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense.... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that, I then am encrypting it using a XML RSA private key. I then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature I get a failure on the signature. Most of my code is modified from sample code I have found since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using and the code I thought I needed to use to get this working right... Any feedback would be greatly appreciated as I am quite lost at this point the original code I was working with was this, this code works fine as long as the user launching the program is the same one that signed the document originally... CspParameters cspParams = new CspParameters(); cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; cspParams.Flags = CspProviderFlags.UseMachineKeyStore; // Create a new RSA signing key and save it in the container. RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams) { PersistKeyInCsp = true, }; This code is what I believe I should be doing but it's failing to verify the signature no matter what I do, regardless if it's the same user or a different one... RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(); //Load the private key from xml file XmlDocument xmlPrivateKey = new XmlDocument(); xmlPrivateKey.Load("KeyPriv.xml"); rsaKey.FromXmlString(xmlPrivateKey.InnerXml); I believe this to have something to do with the key container name (Being a real dumbass here please excuse me) I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case.... cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; Is there a way for me to sign/encrypt the XML with a private key when the application license is generated and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me in this..


  • Declare private static members in F#?

    - by acidzombie24
    I decided to port the class in C# below to F# as an exercise. It was difficult. I only notice three problems 1) Greet is visible 2) I can not get v to be a static class variable 3) I do not know how to set the greet member in the constructor. How do i fix these? The code should be similar enough that i do not need to change any C# source. ATM only Test1.v = 21; does not work C# using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace CsFsTest { class Program { static void Main(string[] args) { Test1.hi("stu"); new Test1().hi(); Test1.v = 21; var a = new Test1("Stan"); a.hi(); a.a = 9; Console.WriteLine("v = {0} {1} {2}", a.a, a.b, a.NotSTATIC()); } } class Test1 { public int a; public int b { get { return a * 2; } } string greet = "User"; public static int v; public Test1() {} public Test1(string name) { greet = name; } public static void hi(string greet) { Console.WriteLine("Hi {0}", greet); } public void hi() { Console.WriteLine("Hi {0} #{1}", greet, v); } public int NotSTATIC() { return v; } } } F# namespace CsFsTest type Test1 = (* public int a; public int b { get { return a * 2; } } string greet = "User"; public static int v; *) [<DefaultValue>] val mutable a : int member x.b = x.a * 2 member x.greet = "User" (*!! Needs to be private *) [<DefaultValue>] val mutable v : int (*!! Needs to be static *) (* public Test1() {} public Test1(string name) { greet = name; } *) new () = {} new (name) = { } (* public static void hi(string greet) { Console.WriteLine("Hi {0}", greet); } public void hi() { Console.WriteLine("Hi {0} #{1}", greet, v); } public int NotSTATIC() { return v; } *) static member hi(greet) = printfn "hi %s" greet member x.hi() = printfn "hi %s #%i" x.greet x.v member x.NotSTATIC() = x.v


  • DataGrid using a UserControl

    - by klawusel
    Hello I am fighting with this problem: I have a usercontrol which contains a textbox and a button (the button calls some functions to set the textbox's text), here is the xaml: <UserControl x:Class="UcSelect" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="Control1Name" <Grid x:Name="grid1" MaxHeight="25"> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition Width="25"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="25"/> </Grid.RowDefinitions> <TextBox x:Name="txSelect" Text="{Binding UcText, Mode=TwoWay}" /> <Button x:Name="pbSelect" Background="Red" Grid.Column="1" Click="pbSelect_Click">...</Button> </Grid> And here the code behind: Partial Public Class UcSelect Private Shared Sub textChangedCallBack(ByVal [property] As DependencyObject, ByVal args As DependencyPropertyChangedEventArgs) Dim UcSelectBox As UcSelect = DirectCast([property], UcSelect) End Sub Public Property UcText() As String Get Return GetValue(UcTextProperty) End Get Set(ByVal value As String) SetValue(UcTextProperty, value) End Set End Property Public Shared ReadOnly UcTextProperty As DependencyProperty = _ DependencyProperty.Register("UcText", _ GetType(String), GetType(UcSelect), _ New FrameworkPropertyMetadata(String.Empty, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, New PropertyChangedCallback(AddressOf textChangedCallBack))) Public Sub New() InitializeComponent() grid1.DataContext = Me End Sub Private Sub pbSelect_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) 'just demo UcText = UcText + "!" End Sub End Class The UserControl works fine when used as a single control in this way: <local:UcSelect Grid.Row="1" x:Name="ucSingle1" UcText="{Binding FirstName, Mode=TwoWay}"/> Now I wanted to use the control in a custom datagrid column. As I like to have binding support I choosed to derive from DataGridtextColumn instead of using a DataGridTemplateColumn, here is the derived column class: Public Class DerivedColumn Inherits DataGridTextColumn Protected Overloads Overrides Function GenerateElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement Dim oElement = MyBase.GenerateElement(oCell, oDataItem) Return oElement End Function Protected Overloads Overrides Function GenerateEditingElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement Dim oUc As New UcSelect Dim oBinding As Binding = CType(Me.Binding, Binding) oUc.SetBinding(UcSelect.UcTextProperty, oBinding) Return oUc End Function End Class The column is used in xaml in the following way: <local:DerivedColumn Header="Usercontrol" Binding="{Binding FirstName, Mode=TwoWay}"></local:DerivedColumn> If I start my program all seems to be fine, but changes I make in the custom column are not reflected in the object (property "FirstName"), the changes are simply rolled back when leaving the cell. I think there must be something wrong with my GenerateEditingElement code, but have no idea ... Any Help would really be appreciated Regards Klaus


  • What Causes Boost Asio to Crash Like This?

    - by Scott Lawson
    My program appears to run just fine most of the time, but occasionally I get a segmentation fault.

    boost version = 1.41.0, running on RHEL 4, compiled with GCC 3.4.6.

    Backtrace:

        #0  0x08138546 in boost::asio::detail::posix_fd_set_adapter::is_set (this=0xb74ed020, descriptor=-1)
            at /home/scottl/boost_1_41_0/boost/asio/detail/posix_fd_set_adapter.hpp:57
                __result = -1 'ÿ'
        #1  0x0813e1b0 in boost::asio::detail::reactor_op_queue::perform_operations_for_descriptors (this=0x97f3b6c, descriptors=@0xb74ed020, result=@0xb74ecec8)
            at /home/scottl/boost_1_41_0/boost/asio/detail/reactor_op_queue.hpp:204
                op_iter = {_M_node = 0xb4169aa0}
                i = {_M_node = 0x97f3b74}
        #2  0x081382ca in boost::asio::detail::select_reactor::run (this=0x97f3b08, block=true)
            at /home/scottl/boost_1_41_0/boost/asio/detail/select_reactor.hpp:388
                read_fds = {fd_set_ = {fds_bits = {16, 0 }}, max_descriptor_ = 65}
                write_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1}
                retval = 1
                lock = { = {}, mutex_ = @0x97f3b1c, locked_ = true}
                except_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1}
                max_fd = 65
                tv_buf = {tv_sec = 0, tv_usec = 710000}
                tv = (timeval *) 0xb74ecf88
                ec = {m_val = 0, m_cat = 0x81f2c24}
                sb = { = {}, blocked_ = true, old_mask_ = {__val = {0, 0, 134590223, 3075395548, 3075395548, 3075395464, 134729792, 3075395360, 135890240, 3075395368, 134593920, 3075395544, 135890240, 3075395384, 134599542, 3020998404, 135890240, 3075395400, 134614095, 3075395544, 4, 3075395416, 134548135, 3021172996, 4294967295, 3075395432, 134692921, 3075395504, 0, 3075395448, 134548107, 3021172992}}}
        #3  0x0812eb45 in boost::asio::detail::task_io_service ::do_one (this=0x97f3a70, lock=@0xb74ed230, this_idle_thread=0xb74ed240, ec=@0xb74ed2c0)
            at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:260
                more_handlers = false
                c = {lock_ = @0xb74ed230, task_io_service_ = @0x97f3a70}
                h = (boost::asio::detail::handler_queue::handler *) 0x97f3aa0
                polling = false
                task_has_run = true
        #4  0x0812765f in boost::asio::detail::task_io_service ::run (this=0x97f3a70, ec=@0xb74ed2c0)
            at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:103
                ctx = { = {}, owner_ = 0x97f3a70, next_ = 0x0}
                this_idle_thread = {wakeup_event = { = {}, cond_ = {__c_lock = {__status = 0, __spinlock = 22446}, __c_waiting = 0x2bd7, __padding = "\000\000\000\000×+\000\000\000\000\000\000×+\000\000\000\000\000\000\204:\177\t\000\000\000", __align = 0}, signalled_ = true}, next = 0x0}
                lock = { = {}, mutex_ = @0x97f3a84, locked_ = false}
                n = 11420
        #5  0x08125e99 in boost::asio::io_service::run (this=0x97ebbcc)
            at /home/scottl/boost_1_41_0/boost/asio/impl/io_service.ipp:58
                ec = {m_val = 0, m_cat = 0x81f2c24}
                s = 8
        #6  0x08154424 in boost::_mfi::mf0::operator() (this=0x9800870, p=0x97ebbcc)
            at /home/scottl/boost_1_41_0/boost/bind/mem_fn_template.hpp:49
                No locals.
        #7  0x08154331 in boost::_bi::list1 ::operator(), boost::_bi::list0 (this=0x9800878, f=@0x9800870, a=@0xb74ed337)
            at /home/scottl/boost_1_41_0/boost/bind/bind.hpp:236
                No locals.
        #8  0x081541e5 in boost::_bi::bind_t, boost::_bi::list1 ::operator() (this=0x9800870)
            at /home/scottl/boost_1_41_0/boost/bind/bind_template.hpp:20
                a = {}
        #9  0x08154075 in boost::detail::thread_data, boost::_bi::list1 ::run (this=0x98007a0)
            at /home/scottl/boost_1_41_0/boost/thread/detail/thread.hpp:56
                No locals.
        #10 0x0816fefd in thread_proxy ()
            at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/locale_facets.tcc:2443
                __ioinit = {static _S_refcount = , static _S_synced_with_stdio = }
                typeinfo for common::RuntimeException = {}
                typeinfo name for common::RuntimeException = "N6common16RuntimeExceptionE"
        #11 0x00af23cc in start_thread () from /lib/tls/libpthread.so.0
                No symbol table info available.
        #12 0x00a5c96e in __init_misc () from /lib/tls/libc.so.6
                No symbol table info available.


  • "EXC_BAD_ACCESS: Unable to restore previously selected frame" Error, Array size?

    - by Job
    Hi there, I have an algorithm for creating the sieve of Eratosthenes and pulling primes from it. It lets you enter a max value for the sieve, and the algorithm gives you the primes below that value and stores them in a C-style array.

    Problem: everything works fine with values up to 500,000; however, when I enter a larger value, while running it gives me the following error message in Xcode:

        Program received signal: "EXC_BAD_ACCESS".
        warning: Unable to restore previously selected frame.
        Data Formatters temporarily unavailable, will re-try after a 'continue'. (Not safe to call dlopen at this time.)

    My first idea was that I didn't use large enough variables, but as I am using 'unsigned long long int', this should not be the problem. Also, the debugger points me to a point in my code where a slot in the array gets assigned a value. Therefore I wonder: is there a maximum limit to an array? If yes, should I use NSArray instead? If no, then what is causing this error, based on this information?

    EDIT: This is what the code looks like (it's not complete, for it fails at the last line posted). I'm using garbage collection.

        /*--------------------------SET UP--------------------------*/
        unsigned long long int upperLimit = 550000; //
        unsigned long long int sieve[upperLimit];
        unsigned long long int primes[upperLimit];
        unsigned long long int indexCEX;
        unsigned long long int primesCounter = 0;

        // Fill sieve with 2 to upperLimit
        for (unsigned long long int indexA = 0; indexA < upperLimit-1; ++indexA)
        {
            sieve[indexA] = indexA + 2;
        }

        unsigned long long int prime = 2;

        /*-------------------------CHECK & FIND----------------------------*/
        while (!((prime*prime) > upperLimit))
        {
            // check off all multiples of prime
            for (unsigned long long int indexB = prime-2; indexB < upperLimit-1; ++indexB)
            {
                // Multiple of prime = 0
                if (sieve[indexB] != 0)
                {
                    if (sieve[indexB] % prime == 0)
                    {
                        sieve[indexB] = 0;
                    }
                }
            }

            /*---------------- Search for next prime ---------------*/
            // index of current prime + 1
            unsigned long long int indexC = prime - 1;
            while (sieve[indexC] == 0)
            {
                ++indexC;
            }
            prime = sieve[indexC];

            // Store prime in primes[] -- this is where the code fails if upperLimit > 500000
            primes[primesCounter] = prime;
            ++primesCounter;

            indexCEX = indexC + 1;
        }

    As you may or may not see, I am -very much- a beginner. Any other suggestions are welcome of course :)


  • Is it possible to change a HANDLE that has been opened for synchronous I/O to be opened for asynchronous I/O?

    - by Martin Dobšík
    Dear all, Most of my daily programming work in Windows is nowadays around I/O operations of all kind (pipes, consoles, files, sockets, ...). I am well aware of different methods of reading and writing from/to different kinds of handles (Synchronous, asynchronous waiting for completion on events, waiting on file HANDLEs, I/O completion ports, and alertable I/O). We use many of those. For some of our applications it would be very useful to have only one way to treat all handles. I mean, the program may not know, what kind of handle it has received and we would like to use, let's say, I/O completion ports for all. So first I would ask: Let's assume I have a handle: HANDLE h; which has been received by my process for I/O from somewhere. Is there any easy and reliable way to find out what flags it has been created with? The main flag in question is FILE_FLAG_OVERLAPPED. The only way known to me so far, is to try to register such handle into I/O completion port (using CreateIoCompletionPort()). If that succeeds the handle has been created with FILE_FLAG_OVERLAPPED. But then only I/O completion port must be used, as the handle can not be unregistered from it without closing the HANDLE h itself. Providing there is an easy a way to determine presence of FILE_FLAG_OVERLAPPED, there would come my second question: Is there any way how to add such flag to already existing handle? That would make a handle that has been originally open for synchronous operations to be open for asynchronous. Would there be a way how to create opposite (remove the FILE_FLAG_OVERLAPPED to create synchronous handle from asynchronous)? I have not found any direct way after reading through MSDN and googling a lot. Would there be at least some trick that could do the same? Like re-creating the handle in same way using CreateFile() function or something similar? Something even partly documented or not documented at all? The main place where I would need this, is to determine the the way (or change the way) process should read/write from handles sent to it by third party applications. We can not control how third party products create their handles. Dear Windows gurus: help please! With regards Martin


  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure method to do backups for programmers who do research & development at home and cannot afford to lose any work?

    Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. An internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status report: this is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: these scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives.

    Other possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies: see crazyscot's answer.


  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt. When I do, I set the E ("End") bit field but leave it at 0, and I get the following behaviour: if, for example, the keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like wireshark, however only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) are failing to sound.

    In section 3.9, figure 2 of the above linked document, they give a 911 example. There all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound.

    I have thought of a couple of possible causes, but do not know if they are the reason:

    1) Figure 2 shows payload types of 96 and 97 being sent. I have not done this, nor do I know exactly how. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".

    2) In section 3.5, "E:", "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission".

    Does anyone have an idea of how to actually do this? I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample wireshark event capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
        RFC 2833 RTP Event
            Event ID: DTMF Pound # (11)
            1... .... = End of Event: True
            .0.. .... = Reserved: False
            ..00 0000 = Volume: 0
            Event Duration: 2048
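
    For reference, the telephone-event payload the capture above decodes is only four bytes, with the field layout taken straight from section 3.5 of the RFC. A sketch of packing it (in C#, since the question doesn't show its sending code):

        // Packs an RFC 2833 telephone-event payload: one byte event id,
        // one byte E/R/volume, two bytes duration in network byte order.
        static byte[] PackDtmfEvent(byte eventId, bool end, byte volume, ushort duration)
        {
            var payload = new byte[4];
            payload[0] = eventId;                          // e.g. 11 == '#'
            payload[1] = (byte)((end ? 0x80 : 0x00)        // E bit (bit 7)
                                | (volume & 0x3F));        // volume 0-63; R bit left 0
            payload[2] = (byte)(duration >> 8);            // duration, high byte
            payload[3] = (byte)(duration & 0xFF);          // duration, low byte
            return payload;
        }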


  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have for hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that

        someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping()

    is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated as well in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of having a VolatileImage, in the case of both having acceleration, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading for some general concepts about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get their "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.


  • emulator crashes

    - by Dave
    I am setting up an Android environment for the first time in Eclipse. I have many years of Eclipse experience, but am new to Android. This is being done on an Apple Mac Mini, running MacOSX 10.6.3. I am using the latest Eclipse Classic, version 3.5.2. I am trying to get the tiny hello world program running. When I run it, I get the following in the console window of Eclipse:

        [2010-06-12 13:48:08 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'Android2.2AVD'
        [2010-06-12 13:48:08 - HelloAndroid] Launching a new emulator with Virtual Device 'Android2.2AVD'
        [2010-06-12 13:48:11 - HelloAndroid] New emulator found: emulator-5554
        [2010-06-12 13:48:11 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-12 13:48:12 - Emulator] 2010-06-12 13:48:12.783 emulator[50495:903] Warning once: This application, or a library it uses, is using NSQuickDrawView, which has been deprecated. Apps should cease use of QuickDraw and move to Quartz.
        [2010-06-12 13:48:19 - HelloAndroid] emulator-5554 disconnected! Cancelling 'com.example.helloandroid.HelloAndroid activity launch'!

    The emulator crashes with the following info. I have followed all the instructions for running the hello world sample. Anyone have any ideas?

        Process:         emulator [50398]
        Path:            /Users/jeremy/android-sdk-mac_86/tools/emulator
        Identifier:      emulator
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  eclipse [50388]

        Date/Time:       2010-06-12 13:28:38.595 -0400
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Interval Since Last Report: 363037 sec
        Crashes Since Last Report: 9
        Per-App Crashes Since Last Report: 7

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000000007fd000
        Crashed Thread:  4

        Thread 0: Dispatch queue: com.apple.main-thread
        0 emulator 0x000eed4e helper_set_cp15 + 30

        Thread 1:
        0 libSystem.B.dylib 0x9020bbd2 __workq_kernreturn + 10
        1 libSystem.B.dylib 0x9020c168 _pthread_wqthread + 941
        2 libSystem.B.dylib 0x9020bd86 start_wqthread + 30

        Thread 2: Dispatch queue: com.apple.libdispatch-manager
        0 libSystem.B.dylib 0x9020cb42 kevent + 10
        1 libSystem.B.dylib 0x9020d25c _dispatch_mgr_invoke + 215
        2 libSystem.B.dylib 0x9020c719 _dispatch_queue_invoke + 163
        3 libSystem.B.dylib 0x9020c4be _dispatch_worker_thread2 + 240
        4 libSystem.B.dylib 0x9020bf41 _pthread_wqthread + 390
        5 libSystem.B.dylib 0x9020bd86 start_wqthread + 30

        Thread 3:
        0 libSystem.B.dylib 0x901e635a semaphore_timedwait_signal_trap + 10
        1 libSystem.B.dylib 0x90213ea1 _pthread_cond_wait + 1066
        2 libSystem.B.dylib 0x90242a28 pthread_cond_timedwait_relative_np + 47
        3 com.apple.audio.CoreAudio 0x9056f965 CAGuard::WaitFor(unsigned long long) + 219
        4 com.apple.audio.CoreAudio 0x90572997 CAGuard::WaitUntil(unsigned long long) + 289
        5 com.apple.audio.CoreAudio 0x90570294 HP_IOThread::WorkLoop() + 1892
        6 com.apple.audio.CoreAudio 0x9056fb2b HP_IOThread::ThreadEntry(HP_IOThread*) + 17
        7 com.apple.audio.CoreAudio 0x9056fa42 CAPThread::Entry(CAPThread*) + 140
        8 libSystem.B.dylib 0x90213a19 _pthread_start + 345
        9 libSystem.B.dylib 0x9021389e thread_start + 34

        Thread 4 Crashed:
        0 emulator 0x00040380 audioInDeviceIOProc + 96

        Thread 4 crashed with X86 Thread State (32-bit):
          eax: 0x00000000  ebx: 0x007fd000  ecx: 0x000001fe  edx: 0x0198f3f0
          edi: 0x00000200  esi: 0x01119850  ebp: 0x01119800  esp: 0xb020fad0
           ss: 0x0000001f  efl: 0x00010212  eip: 0x00040380   cs: 0x00000017
           ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
          cr2: 0x007fd000


  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes and so forth) are fixed, and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed.

    I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients, and save the archive data to a table like the following:

        Archive {
            ReferenceId,      (i.e. RecipeId)
            ReferenceTypeId,  (i.e. Recipe)
            ArchiveData varbinary(max)
        }

    Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sighted) that this information needs to be reported on. As far as I'm aware, I can't think how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 Reporting Services at the moment.

    Alternatively, I guess I could do the following:

    1. Duplicate every table and tag the word "Archive" onto the end of the table name. This would then give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated.
    2. Add a nullable, non-foreign-key property called "CopiedFromId" to every table that contains fixed data, and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches.
    3. Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and do not like any of these ideas particularly. Can anyone please advise the best course of action?

    EDIT: Or 4) create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would cause about 4 extra tables for the report in question.
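
    On the "inflate the serialized data back" point: if the blob was written with protobuf-net's Serializer.Serialize, deserializing it and flattening it into a DataTable is one route to feeding a report. A sketch only — the Recipe property names here (Name, Ingredients) are invented for illustration:

        using System.Data;
        using System.IO;
        using ProtoBuf;

        static class ArchiveReporting
        {
            // Inflate the archived blob back into the object graph.
            static Recipe LoadArchivedRecipe(byte[] archiveData)
            {
                using (var ms = new MemoryStream(archiveData))
                    return Serializer.Deserialize<Recipe>(ms);
            }

            // Flatten whatever the report needs into a DataTable that a
            // ReportViewer/SSRS data source can consume.
            static DataTable ToReportTable(Recipe recipe)
            {
                var table = new DataTable("Ingredients");
                table.Columns.Add("Recipe", typeof(string));
                table.Columns.Add("Ingredient", typeof(string));
                foreach (var ingredient in recipe.Ingredients)
                    table.Rows.Add(recipe.Name, ingredient.Name);
                return table;
            }
        }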


  • Java, Swing, GridLayout problem

    - by josh
    I have a panel with GridLayout. But when I try to run the program, only the first button out of 100 is shown. Furthermore, the rest appear only when I move the cursor over them. What's wrong with it? Here's the whole class (Life.CELLS = 10, and CellButton is a class which extends JButton):

        public class MainLayout extends JFrame {

            public MainLayout() {
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                setSize(650, 750);
                setLayout(new FlowLayout());
                //setResizable(false);

                final JPanel gridPanel = new JPanel(new GridLayout(Life.CELLS, Life.CELLS));
                for (int i = 0; i < Life.CELLS; i++) {
                    for (int j = 0; j < Life.CELLS; j++) {
                        CellButton jb = new CellButton(i, j);
                        jb.setPreferredSize(new Dimension(jb.getIcon().getIconHeight(),
                                                          jb.getIcon().getIconWidth()));
                        buttons[i][j] = jb;
                        grid[i][j] = false;
                        gridPanel.add(jb);
                    }
                }
                add(gridPanel);
            }
        }

    This is the code of CellButton:

        package classes;

        import javax.swing.JButton;
        import javax.swing.ImageIcon;
        import javax.swing.JFrame;

        public class CellButton extends JButton {
            private int x;
            private int y;
            boolean alive;
            ImageIcon icon;
            boolean next;

            // icons for grids
            final ImageIcon dead = new ImageIcon(JFrame.class.getResource("/images/image1.gif"));
            final ImageIcon live = new ImageIcon(JFrame.class.getResource("/images/image2.gif"));

            public CellButton(int X, int Y) {
                super();
                x = X;
                y = Y;
                alive = false;
                icon = dead;
                setIcon(icon);
            }

            public int getX() { return x; }
            public int getY() { return y; }
            public boolean isAlive() { return alive; }

            public void relive() {
                alive = true;
                icon = live;
                setIcon(icon);
            }

            public void die() {
                alive = false;
                icon = dead;
                setIcon(icon);
            }

            public void setNext(boolean n) { next = n; }
            public boolean getNext() { return next; }
            public ImageIcon getIcon() { return icon; }
        }


  • Android HttpsURLConnection and JSON for new GCM

    - by Ryan Gray
    I'm overhauling certain parts of my app to use the new GCM service to replace C2DM. I simply want to create the JSON request from a Java program for testing, and then read the response. As of right now I can't find ANY formatting issues with my JSON request, and the google server always returns code 400, which indicates a problem with my JSON. http://developer.android.com/guide/google/gcm/gcm.html#server

        JSONObject obj = new JSONObject();
        obj.put("collapse_key", "collapse key");

        JSONObject data = new JSONObject();
        data.put("info1", "info_1");
        data.put("info2", "info 2");
        data.put("info3", "info_3");
        obj.put("data", data);

        JSONArray ids = new JSONArray();
        ids.add(REG_ID);
        obj.put("registration_ids", ids);

        System.out.println(obj.toJSONString());

    I print my request to the eclipse console to check its formatting.

        byte[] postData = obj.toJSONString().getBytes();
        try {
            URL url = new URL("https://android.googleapis.com/gcm/send");
            HttpsURLConnection.setDefaultHostnameVerifier(new JServerHostnameVerifier());
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setDoInput(true);
            conn.setUseCaches(false);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setRequestProperty("Authorization", "key=" + API_KEY);
            System.out.println(conn.toString());

            OutputStream out = conn.getOutputStream();

            // exception thrown right here. no InputStream to get
            InputStream in = conn.getInputStream();

            byte[] response = null;
            out.write(postData);
            out.close();
            in.read(response);

            JSONParser parser = new JSONParser();
            String temp = new String(response);
            JSONObject temp1 = (JSONObject) parser.parse(temp);
            System.out.println(temp1.toJSONString());

            int responseCode = conn.getResponseCode();
            System.out.println(responseCode + "");
        } catch (Exception e) {
            System.out.println("Exception thrown\n" + e.getMessage());
        }

    I'm sure my API key is correct, as that would result in error 401, so says the google documentation. This is my first time doing JSON, but it's easy to understand because of its simplicity. Anyone have any ideas on why I always receive code 400? Update: I've tested the google server example classes provided with GCM, so the problem MUST be with my code.


  • MBR Booting from DOS

    - by eflukx
    For a project I would like to invoke the MBR on the first harddisk directly from DOS. I've written a small assembler program that loads the MBR into memory at 0:7c00h and does a far jump to it. I've put my utility on a bootable floppy.

    The disk (HD0, 0x80) I'm trying to boot has a TrueCrypt boot loader on it. It shows the TrueCrypt screen, but after typing in the password it crashes the system. When I run my little utility (w00t.com) on a normal WinXP machine, it seems to crash immediately. Apparently I'm forgetting some crucial stuff the BIOS normally does; my guess is it's something trivial. Can someone with better bare-metal DOS and BIOS experience help me out? Here's my code:

        .MODEL tiny
        .386

        _TEXT SEGMENT USE16
        INCLUDE BootDefs.i
        ORG 100h

        start:
        ; http://vxheavens.com/lib/vbw05.html
        ; Before DOS has booted the BIOS stores the amount of usable lower memory
        ; in a word located at 0:413h in memory. We going to erase this value because
        ; we have booted dos before loading the bootsector, and dos is fat (and ugly).

        ; fake free memory
        ;push ds
        ;push 0
        ;pop ds
        ;mov ax, TC_BOOT_LOADER_SEGMENT / 1024 * 16 + TC_BOOT_MEMORY_REQUIRED
        ;mov word ptr ds:[413h], ax ; ax = memory in K
        ;pop ds

        ;lea si, memory_patched_msg
        ;call print

        ;mov ax, cs
        mov ax, 0
        mov es, ax

        ; read first sector to es:7c00h (== cs:7c00)
        mov dl, 80h
        mov cl, 1
        mov al, 1
        mov bx, 7c00h ; load sector to es:bx
        call read_sectors

        lea si, mbr_loaded_msg
        call print
        lea si, jmp_to_mbr_msg
        call print

        ; Set BIOS default values in environment
        cli
        mov dl, 80h ; (drive C)
        xor ax, ax
        mov ds, ax
        mov es, ax
        mov ss, ax
        mov sp, 0ffffh
        sti

        push es
        push 7c00h
        retf ; Jump to MBR code at 0:7c00h

        ; Print string
        print:
        xor bx, bx
        mov ah, 0eh
        cld
        @@:
        lodsb
        test al, al
        jz print_end
        int 10h
        jmp @B
        print_end:
        ret

        ; Read sectors of the first cylinder
        read_sectors:
        mov ch, 0 ; Cylinder
        mov dh, 0 ; Head
        ; DL = drive number passed from BIOS
        mov ah, 2
        int 13h
        jnc read_ok
        lea si, disk_error_msg
        call print
        read_ok:
        ret

        memory_patched_msg db 'Memory patched', 13, 10, 7, 0
        mbr_loaded_msg db 'MBR loaded', 13, 10, 7, 0
        jmp_to_mbr_msg db 'Jumping to MBR code', 13, 10, 7, 0
        disk_error_msg db 'Disk error', 13, 10, 7, 0

        _TEXT ENDS
        END start


  • Rush Hour - Solving the game

    - by Rubys
    Rush Hour, if you're not familiar with it: the game consists of a collection of cars of varying sizes, set either horizontally or vertically, on an NxM grid that has a single exit. Each car can move forward/backward in the direction it's set in, as long as another car is not blocking it. You can never change the direction of a car. There is one special car, usually the red one. It's set in the same row that the exit is in, and the objective of the game is to find a series of moves (a move = moving a car N steps back or forward) that will allow the red car to drive out of the maze.

    I've been trying to think how to solve this problem computationally, and I can really not think of any good solution. I came up with a few:

    1. Backtracking. This is pretty simple - recursion and some more recursion until you find the answer. However, each car can be moved a few different ways, and in each game state a few cars can be moved, and the resulting game tree will be HUGE.
    2. Some sort of constraint algorithm that will take into account what needs to be moved, and work recursively somehow. This is a very rough idea, but it is an idea.
    3. Graphs? Model the game states as a graph and apply some sort of variation on a coloring algorithm, to resolve dependencies? Again, this is a very rough idea.
    4. A friend suggested genetic algorithms. This is sort of possible but not easily. I can't think of a good way to make an evaluation function, and without that we've got nothing.

    So the question is - how to create a program that takes a grid and the vehicle layout, and outputs a series of steps needed to get the red car out?

    Sub-issues:

    1. Finding some solution.
    2. Finding an optimal solution (minimal number of moves).
    3. Evaluating how good a current state is.

    Example: how can you move the cars in this setting, so that the red car can "exit" the maze through the exit on the right?
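
    One note on sub-issues 1 and 2: since every move costs the same, a plain breadth-first search over board states finds a minimal-move solution by construction, and a visited set prunes the "HUGE" tree down to the number of distinct reachable states, which is modest on a 6x6 board. A generic sketch in C# (the board encoding, move generation and goal test are deliberately left as delegates, since they depend on how you represent the grid):

        using System;
        using System.Collections.Generic;

        static class Bfs
        {
            // TState should be an immutable board encoding with value equality,
            // e.g. a string with one character per cell.
            public static List<TState> Solve<TState>(
                TState start,
                Func<TState, IEnumerable<TState>> successors, // all legal single-car moves
                Func<TState, bool> isGoal)                    // red car at the exit?
            {
                var parent = new Dictionary<TState, TState>();
                var seen = new HashSet<TState> { start };
                var queue = new Queue<TState>();
                queue.Enqueue(start);

                while (queue.Count > 0)
                {
                    TState state = queue.Dequeue();
                    if (isGoal(state))
                    {
                        // Walk parents back to the start to recover the move list.
                        var path = new List<TState> { state };
                        while (parent.TryGetValue(state, out TState prev))
                        {
                            path.Add(prev);
                            state = prev;
                        }
                        path.Reverse();
                        return path; // minimal number of moves: BFS explores by depth
                    }
                    foreach (TState next in successors(state))
                    {
                        if (seen.Add(next)) // Add returns false if already visited
                        {
                            parent[next] = state;
                            queue.Enqueue(next);
                        }
                    }
                }
                return null; // unsolvable layout
            }
        }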


  • Controlling VirtualBox internet access?

    - by HandyGandy
    I am finally going through the process of moving my XP into a VBox (host Linux). The thing is that I am migrating a virtually clean install. So aside from the occasional antivirus scan, I want to make sure that my XP is not silently sending malware data out (keystroke logs, spam, etc.), having picked up some virus. To that end I want to limit XP to contacting my LAN and a few internet sites (mainly sites that require proprietary Windows-only software to access, AV sites and Windows Update). I want XP to only access preapproved addresses. If it is trying to contact a non-approved address, I want it somehow logged and access restricted until I allow it. I also don't want to have to decide whether to allow access to a site at my leisure.

    To keep things clear, let me give an example: I start my VBox/XP (which I call MYXP) running on my Linux box (called MYLINUX, connecting to the net through a Linksys WRT54G), and it connects via Samba to my LAN (since my LAN seems to be possessed of every evil thing, its address is 192.168.666.). At the moment my configuration is set so that I allow MYXP to access 192.168.666 and www.MYANTIVIRUS_UPDATES.com and www.MS_UPDATES.com.

    Then on the VM I start a program which tries to make a connection to www.playmygame.com. www.playmygame.com is on my preapproved list, so the connection goes through. Later I check attempted accesses and discover that it also tried to connect to www.mygame_high_scores.com. I figure this is OK, so I add www.mygame_high_scores.com to my approved list. Later, I again check addresses and discover that my VM/XP tried to access www.mygame_steals_your_identity.com. I do some checking and discover the address is registered to someone in Kiev, Nigeria. Since this doesn't sound kosher to me, I replace the MYXP VM with one that was backed up before I installed mygame. I remove www.playmygame.com and www.mygame_high_scores.com from my access list for MYXP.

    It should accomplish this with little overhead. When I am not running the VM, ideally it should not have any overhead. Suggestions?


  • Consuming Web Services requiring Authentication from behind Proxy server

    - by Jan Petersen
    Hi All, I've seen a number of posts about proxy authentication, but none that seems to address this problem. I'm building a SharePoint Web Service consuming desktop application, using Java, JAX-WS in NetBeans. I have a working prototype that can query the server for the authentication mode, successfully authenticate, and retrieve a list of web sites.

    However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. The normal -Dhttp.proxyHost ... settings do not seem to help any. But I have found that by creating a ProxySelector class and setting it as default, I can regain access to the authentication web service, but I still can't retrieve the list of web sites from the SharePoint server. It's almost as if the authentication I provide is going to the proxy rather than the SharePoint server. Anyone have any experience of how to make this work?

    I have put the source text java class files of a demo app up, showing the issue, at the following urls (it's a bit too long, even in the short demo form, to post here): link text

    When running the code from a network behind a proxy server, I successfully retrieve the authentication mode from the server, but the request for the web site list generates an exception originating at:

        com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:201)

    The output from the source when no proxy is on the network is listed below:

        Successfully retrieved the SharePoint WebService response for Authentication
        SharePoint authentication method is: WINDOWS
        Calling Web Service to retrieve list of web site.
        Web Service call response:
        -------------- XML START --------------
        <Webs xmlns="http://schemas.microsoft.com/sharepoint/soap/">
            <Web Title="Collaboration Lab" Url="http://host.domain.com/collaboration"/>
            <Web Title="Global Data Lists" Url="http://host.domain.com/global_data_lists"/>
            <Web Title="Landing" Url="http://host.domain.com/Landing"/>
            <Web Title="SharePoint HelpDesk" Url="http://host.domain.com/helpdesk"/>
            <Web Title="Program Management" Url="http://host.domain.com/programmanagement"/>
            <Web Title="Project Site" Url="http://host.domain.com/Project Site"/>
            <Web Title="SharePoint Administration Tools" Url="http://host.domain.com/admin"/>
            <Web Title="Space Management Project" Url="http://host.domain.com/spacemgmt"/>
        </Webs>
        -------------- XML END --------------

    Br Jan


  • XML Parsing Error - C#

    - by Indebi
    My code is having an XML parsing error at line 7, position 32, and I'm not really sure why.

    Exact error dump:

        5/1/2010 10:21:42 AM
        System.Xml.XmlException: An error occurred while parsing EntityName. Line 7, position 32.
           at System.Xml.XmlTextReaderImpl.Throw(Exception e)
           at System.Xml.XmlTextReaderImpl.Throw(String res, Int32 lineNo, Int32 linePos)
           at System.Xml.XmlTextReaderImpl.HandleEntityReference(Boolean isInAttributeValue, EntityExpandType expandType, Int32& charRefEndPos)
           at System.Xml.XmlTextReaderImpl.ParseAttributeValueSlow(Int32 curPos, Char quoteChar, NodeData attr)
           at System.Xml.XmlTextReaderImpl.ParseAttributes()
           at System.Xml.XmlTextReaderImpl.ParseElement()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
           at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
           at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)
           at System.Xml.XmlDocument.Load(XmlReader reader)
           at System.Xml.XmlDocument.Load(String filename)
           at Lookoa.LoadingPackages.LoadingPackages_Load(Object sender, EventArgs e) in C:\Users\Administrator\Documents\Visual Studio 2010\Projects\Lookoa\Lookoa\LoadingPackages.cs:line 30

    XML file (please note this is just a sample, because I want the program to work before I begin to fill this repository):

        <repo>
          <Packages>
            <TheFirstPackage id="00001" longname="Mozilla Firefox" appver="3.6.3" pkgver="0.01" description="Mozilla Firefox is a free and open source web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. A Net Applications statistic put Firefox at 24.52% of the recorded usage share of web browsers as of March 2010[update], making it the second most popular browser in terms of current use worldwide after Microsoft's Internet Explorer." cat="WWW" rlsdate="4/8/10" pkgloc="http://google.com"/>
          </Packages>
          <categories>
            <WWW longname="World Wide Web" description="Software that's focus is communication or primarily uses the web for any particular reason."> </WWW>
            <Fun longname="Entertainment & Other" description="Music Players, Video Players, Games, or anything that doesn't fit in any of the other categories."> </Fun>
            <Work longname="Productivity" description="Application's commonly used for occupational needs or, stuff you work on"> </Work>
            <Advanced longname="System & Security" description="Applications that protect the computer from malware, clean the computer, and other utilities."> </Advanced>
          </categories>
        </repo>

    Small part of the C# code:

        // Loading the Package and Category lists.
        // The info from them is gonna populate the listboxes for Category and Packages.
        Repository.Load("repo.info");
        XmlNodeList Categories = Repository.GetElementsByTagName("categories");
        foreach (XmlNode Category in Categories)
        {
            CategoryNumber++;
            CategoryNames[CategoryNumber] = Category.Name;
            MessageBox.Show(CategoryNames[CategoryNumber]);
        }

    The MessageBox.Show() is just to make sure it's getting the correct results.
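
    Worth noting (not from the original post): "An error occurred while parsing EntityName" is the message XmlDocument produces for a raw & in markup, and the sample above contains two, in "Entertainment & Other" and "System & Security" — an ampersand in an attribute value must be escaped as &amp;. A minimal repro:

        using System;
        using System.Xml;

        class AmpersandRepro
        {
            static void Main()
            {
                var doc = new XmlDocument();
                try
                {
                    // Raw '&' in an attribute value: illegal XML.
                    doc.LoadXml("<categories><Fun longname=\"Entertainment & Other\"/></categories>");
                }
                catch (XmlException ex)
                {
                    Console.WriteLine(ex.Message); // "An error occurred while parsing EntityName..."
                }

                // Escaped form parses fine, and the value reads back with a plain '&'.
                doc.LoadXml("<categories><Fun longname=\"Entertainment &amp; Other\"/></categories>");
                Console.WriteLine(doc.DocumentElement.FirstChild.Attributes["longname"].Value);
            }
        }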


  • WCF Self Host Service - Endpoints in C#

    - by Kyle
    My first few attempts at creating a self-hosted service. Trying to make up something which will accept a query string and return some text, but I have a few issues.

    All the documentation talks about endpoints being created automatically for each base address if they are not found in a config file. This doesn't seem to be the case for me; I get the "Service has zero application endpoints..." exception. Manually specifying a base endpoint as below seems to resolve this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        namespace TestService
        {
            [ServiceContract]
            public interface IHelloWorldService
            {
                [OperationContract]
                string SayHello(string name);
            }

            public class HelloWorldService : IHelloWorldService
            {
                public string SayHello(string name)
                {
                    return string.Format("Hello, {0}", name);
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    string baseaddr = "http://localhost:8080/HelloWorldService/";
                    Uri baseAddress = new Uri(baseaddr);

                    // Create the ServiceHost.
                    using (ServiceHost host = new ServiceHost(typeof(HelloWorldService), baseAddress))
                    {
                        // Enable metadata publishing.
                        ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
                        smb.HttpGetEnabled = true;
                        smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
                        host.Description.Behaviors.Add(smb);

                        // for some reason a default endpoint does not get created here
                        host.AddServiceEndpoint(typeof(IHelloWorldService), new BasicHttpBinding(), baseaddr + "SayHello");

                        host.Open();
                        Console.WriteLine("The service is ready at {0}", baseAddress);
                        Console.WriteLine("Press <Enter> to stop the service.");
                        Console.ReadLine();

                        // Close the ServiceHost.
                        host.Close();
                    }
                }
            }
        }

    I still think I'm doing something wrong, as I don't get the normal "This is a web service...etc..." page when I load up the url. How would I go about setting this up to return the value of name in SayHello(string name) when requested thusly: localhost:8080/HelloWorldService/SayHello?name=kyle. Do I have to create an endpoint for the SayHello contract as well? I'm trying to walk before running, but this just seems like crawling...
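
    Two observations, hedged: automatic default endpoints are a WCF 4.0 feature, so on .NET 3.5 the zero-endpoints exception is expected without explicit endpoints. And for the ...SayHello?name=kyle query-string style specifically, BasicHttpBinding (SOAP) won't dispatch on a GET; the REST support in System.ServiceModel.Web would. A sketch, assuming .NET 3.5+ with a reference to System.ServiceModel.Web:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IHelloWorldService
        {
            // GET http://localhost:8080/HelloWorldService/SayHello?name=kyle
            [OperationContract]
            [WebGet(UriTemplate = "SayHello?name={name}")]
            string SayHello(string name);
        }

        public class HelloWorldService : IHelloWorldService
        {
            public string SayHello(string name)
            {
                return string.Format("Hello, {0}", name);
            }
        }

        class Program
        {
            static void Main()
            {
                var baseAddress = new Uri("http://localhost:8080/HelloWorldService/");

                // WebServiceHost adds a WebHttpBinding endpoint (with the behavior
                // needed for UriTemplate dispatch) at the base address for us.
                using (var host = new WebServiceHost(typeof(HelloWorldService), baseAddress))
                {
                    host.Open();
                    Console.WriteLine("Browse to {0}SayHello?name=kyle", baseAddress);
                    Console.ReadLine(); // the response comes back XML-serialized by default
                }
            }
        }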


  • .NET OCRing an Image

    - by Kirschstein
    I'm trying to use MODI to OCR a Windows program. It works fine for screenshots I grab programmatically using win32 interop, like this:

        public string SaveScreenShotToFile()
        {
            RECT rc;
            GetWindowRect(_hWnd, out rc);

            int width = rc.right - rc.left;
            int height = rc.bottom - rc.top;

            Bitmap bmp = new Bitmap(width, height);
            Graphics gfxBmp = Graphics.FromImage(bmp);
            IntPtr hdcBitmap = gfxBmp.GetHdc();

            PrintWindow(_hWnd, hdcBitmap, 0);

            gfxBmp.ReleaseHdc(hdcBitmap);
            gfxBmp.Dispose();

            string fileName = @"c:\temp\screenshots\" + Guid.NewGuid().ToString() + ".bmp";
            bmp.Save(fileName);
            return fileName;
        }

    This image is then saved to a file and run through MODI like this:

        private string GetTextFromImage(string fileName)
        {
            MODI.Document doc = new MODI.DocumentClass();
            doc.Create(fileName);
            doc.OCR(MODI.MiLANGUAGES.miLANG_ENGLISH, true, true);

            MODI.Image img = (MODI.Image)doc.Images[0];
            MODI.Layout layout = img.Layout;

            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < layout.Words.Count; i++)
            {
                MODI.Word word = (MODI.Word)layout.Words[i];
                sb.Append(word.Text);
                sb.Append(" ");
            }
            if (sb.Length > 1)
                sb.Length--;
            return sb.ToString();
        }

    This part works fine. However, I don't want to OCR the entire screenshot, just portions of it. I try cropping the image programmatically like this:

        private string SaveToCroppedImage(Bitmap original)
        {
            Bitmap result = original.Clone(new Rectangle(0, 0, 250, 250), original.PixelFormat);
            var fileName = "c:\\" + Guid.NewGuid().ToString() + ".bmp";
            result.Save(fileName, original.RawFormat);
            return fileName;
        }

    and then OCRing this smaller image; however, MODI throws an exception: 'OCR running error', the error code is -959967087. Why can MODI handle the original bitmap but not the smaller version taken from it?
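
    A hedged guess worth testing: MODI's OCR is commonly reported to fail with exactly this kind of error on very small bitmaps, so upscaling the cropped region before handing it to MODI sometimes gets it through. A sketch (the scale factor of 4 is arbitrary):

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        // Crop, then enlarge the region before OCR -- tiny bitmaps
        // often make MODI throw "OCR running error".
        private string SaveToEnlargedCroppedImage(Bitmap original, Rectangle region, int scale)
        {
            using (Bitmap cropped = original.Clone(region, original.PixelFormat))
            using (var enlarged = new Bitmap(cropped.Width * scale, cropped.Height * scale))
            {
                using (Graphics g = Graphics.FromImage(enlarged))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.DrawImage(cropped, 0, 0, enlarged.Width, enlarged.Height);
                }
                var fileName = @"c:\" + Guid.NewGuid().ToString() + ".bmp";
                enlarged.Save(fileName, ImageFormat.Bmp);
                return fileName;
            }
        }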


  • How to commit into TortoiseSVN using a CruiseControl config file

    - by pratap
    Hi all, can anyone tell me how to commit into TortoiseSVN using a CruiseControl config file? I am getting the error: "C:***\Documentation\trunk\dotnet\svn" is not executable or it may not exist. Here's the config part:

        <workingDirectory>C:\*****\Documentation\trunk\dotnet\</workingDirectory>
        <category>Individual Solutions</category>
        <modificationDelaySeconds>10</modificationDelaySeconds>
        <sourcecontrol type="svn">
          <trunkUrl>******* svn url *********</trunkUrl>
          <username> unname </username>
          <password> pwd </password>
          <autoGetSource>true</autoGetSource>
        </sourcecontrol>
        <tasks>
          <exec>
            <executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe</executable>
            <buildTimeoutSeconds>1200</buildTimeoutSeconds>
            <successExitCodes>0</successExitCodes>
          </exec>
          <exec>
            <executable>iisreset</executable>
            <buildArgs>/stop</buildArgs>
          </exec>
          <exec>
            <executable>c:\Program Files\TortoiseSVN\bin\TortoiseProc.exe /command:commit /path:"C:\*****\Documentation\trunk\dotnet\"</executable>
            <buildTimeoutSeconds>1200</buildTimeoutSeconds>
            <successExitCodes>0</successExitCodes>
            <description>checkin shared content...</description>
          </exec>
          <exec>
            <executable>iisreset</executable>
            <buildArgs>/start</buildArgs>
          </exec>
        </tasks>
    </project>

    Thank you all
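
    A hedged observation: in CruiseControl.NET's <exec> task, <executable> should contain only the program path; the /command:commit arguments currently inside it belong in <buildArgs>, which is consistent with the "is not executable or it may not exist" error. A corrected sketch of just that task (the exact TortoiseProc switches to pass are up to you):

        <exec>
          <executable>c:\Program Files\TortoiseSVN\bin\TortoiseProc.exe</executable>
          <buildArgs>/command:commit /path:"C:\*****\Documentation\trunk\dotnet\"</buildArgs>
          <buildTimeoutSeconds>1200</buildTimeoutSeconds>
          <successExitCodes>0</successExitCodes>
          <description>checkin shared content...</description>
        </exec>

    Note that TortoiseProc normally pops up its commit dialog; for a fully unattended commit on a build server, the command-line client (svn.exe commit) may be the safer tool.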


  • Floating point vs integer calculations on modern hardware

    - by maxpenguin
    I am doing some performance critical work in C++, and we are currently using integer calculations for problems that are inherently floating point because "its faster". This causes a whole lot of annoying problems and adds a lot of annoying code.

    Now, I remember reading about how floating point calculations were so slow approximately circa the 386 days, where I believe (IIRC) there was an optional co-processor. But surely nowadays, with exponentially more complex and powerful CPUs, it makes no difference in "speed" if doing floating point or integer calculation? Especially since the actual calculation time is tiny compared to something like causing a pipeline stall or fetching something from main memory?

    I know the correct answer is to benchmark on the target hardware. What would be a good way to test this? I wrote two tiny C++ programs and compared their run time with "time" on Linux, but the actual run time is too variable (it doesn't help that I am running on a virtual server). Short of spending my entire day running hundreds of benchmarks, making graphs etc., is there something I can do to get a reasonable test of the relative speed? Any ideas or thoughts? Am I completely wrong?

    The programs I used are as follows; they are not identical by any means:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            int accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += rand( ) % 365;
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Program 2:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            float accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += (float)( rand( ) % 365 );
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Thanks in advance!


  • WPF / Silverlight Binding when setting DataTemplate programmatically

    - by Daniel
    I have my little designer tool (my program). On the left side I have a TreeView and on the right side I have an Accordion. When I select a node, I want to dynamically build accordion items based on properties from the DataContext of the selected node. Selecting nodes works fine, and when I use this sample code for testing it also works.

    XAML code:

        <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous">
            <layoutToolkit:AccordionItem Header="Controller Info">
                <StackPanel Orientation="Horizontal" DataContext="{Binding}">
                    <TextBlock Text="Content:" />
                    <TextBlock Text="{Binding Path=Name}" />
                </StackPanel>
            </layoutToolkit:AccordionItem>
        </layoutToolkit:Accordion>

    C# code:

        private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            if (e.NewValue != e.OldValue)
            {
                if (e.NewValue is SceneNode)
                {
                    accPanel.DataContext = e.NewValue; // e.NewValue is a class that contains a Name property
                }
            }
        }

    But the problem occurs when I'm trying to achieve this using a DataTemplate and dynamically built AccordionItems; the binding is not working:

        <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous" />

    and the DataTemplate in my ResourceDictionary:

        <DataTemplate x:Key="dtSceneNodeContent">
            <StackPanel Orientation="Horizontal" DataContext="{Binding}">
                <TextBlock Text="Content:" />
                <TextBlock Text="{Binding Path=Name}" />
            </StackPanel>
        </DataTemplate>

    and the C# code:

        private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            if (e.NewValue != e.OldValue)
            {
                ResourceDictionary rd = new ResourceDictionary();
                rd.Source = new Uri("/SilverGL.GUI;component/SilverGLDesignerResourceDictionary.xaml", UriKind.RelativeOrAbsolute);

                if (e.NewValue is SceneNode)
                {
                    accPanel.DataContext = e.NewValue;

                    AccordionItem accController = new AccordionItem();
                    accController.Header = "Controller Info";
                    accController.ContentTemplate = rd["dtSceneNodeContent"] as DataTemplate;
                    accPanel.Items.Add(accController);
                }
                else
                {
                    // Other type of node
                }
            }
        }

    I really need help with this issue. Thanks for any support.
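
    A hedged guess at the failure: in the second version the AccordionItem gets a ContentTemplate but never a Content, and a DataTemplate binds against the Content it is applied to — the panel's DataContext doesn't substitute for it. Worth trying:

        AccordionItem accController = new AccordionItem();
        accController.Header = "Controller Info";
        accController.ContentTemplate = rd["dtSceneNodeContent"] as DataTemplate;

        // The template needs an object to render: hand it the selected node
        // explicitly, otherwise {Binding Path=Name} has no source to resolve.
        accController.Content = e.NewValue;

        accPanel.Items.Add(accController);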


  • Why is Perl's Math::Complex taking up so much time when I try acos(1)?

    - by synapz
    While trying to profile my Perl program, I find that Math::Complex is taking up a lot of time with what looks like some kind of warning. Also, my code shouldn't have any complex numbers being generated or used, so I am not sure what it is doing in Math::Complex anyway. Here's the FastProf output for the most expensive lines:

        /usr/lib/perl5/5.8.8/Math/Complex.pm:182  1.55480  276232: _cannot_make("real part", $re) unless $re =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:310  1.01132  453641: sub cartesian {$_[0]->{c_dirty} ?
        /usr/lib/perl5/5.8.8/Math/Complex.pm:315  0.97497  562188: sub set_cartesian { $_[0]->{p_dirty}++; $_[0]->{c_dirty} = 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:189  0.86302  276232: return $self;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1332 0.85628  293660: $self->{display_format} = { %display_format };
        /usr/lib/perl5/5.8.8/Math/Complex.pm:185  0.81529  276232: _cannot_make("imaginary part", $im) unless $im =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1316 0.78749  293660: my %display_format = %DISPLAY_FORMAT;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1335 0.69534  293660: %{$self->{display_format}} :
        /usr/lib/perl5/5.8.8/Math/Complex.pm:186  0.66697  276232: $self->set_cartesian([$re, $im ]);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:170  0.62790  276232: my $self = bless {}, shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:172  0.56733  276232: if (@_ == 0) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:316  0.53179  281094: $_[0]->{'cartesian'} = $_[1] }
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1324 0.48768  293660: if (@_ == 1) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1319 0.44835  293660: if (exists $self->{display_format}) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1318 0.40355  293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:187  0.39950  276232: $self->display_format('cartesian');
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1315 0.39312  293660: my $self = shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1331 0.38087  293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:184  0.35171  276232: $im ||= 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:181  0.34145  276232: if (defined $re) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:171  0.33492  276232: my ($re, $im);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:390  0.20658  128280: my ($z1, $z2, $regular) = @_;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:391  0.20631  128280: if ($z1->{p_dirty} == 0 and ref $z2 and $z2->{p_dirty} == 0) {

    Thanks for any help!


  • Register all GUI components as Observers, or pass the current object to the next object as a constructor argument?

    - by Jack
    First, I'd like to say that I think this is a common issue and there may be a simple or common solution that I am unaware of. Many have probably encountered a similar problem. Thanks for reading.

    I am creating a GUI where each component needs to communicate with (or at least be updated by) multiple other components. Currently, I'm using a singleton class to accomplish this goal. Each GUI component gets the instance of the singleton and registers itself. When updates need to be made, the singleton can call public methods in the registered class. I think this is similar to an Observer pattern, but the singleton has more control. Currently, the program is set up something like this:

        class c1 {
            CommClass cc;
            c1() {
                cc = CommClass.getCommClass();
                cc.registerC1( this );
                C2 c2 = new c2();
            }
        }

        class c2 {
            CommClass cc;
            c2() {
                cc = CommClass.getCommClass();
                cc.registerC2( this );
                C3 c3 = new c3();
            }
        }

        class c3 {
            CommClass cc;
            c3() {
                cc = CommClass.getCommClass();
                cc.registerC3( this );
                C4 c4 = new c4();
            }
        }

        etc.

    Unfortunately, the singleton class keeps growing larger as more communication is required between the components. I was wondering if it's a good idea, instead of using this singleton, to pass the higher-order GUI components as arguments in the constructors of each GUI component:

        class c1 {
            c1() {
                C2 c2 = new c2( this );
            }
        }

        class c2 {
            C1 c1;
            c2( C1 c1 ) {
                this.c1 = c1;
                C3 c3 = new c3( c1, this );
            }
        }

        class c3 {
            C1 c1;
            C2 c2;
            c3( C1 c1, C2 c2 ) {
                this.c1 = c1;
                this.c2 = c2;
                C4 c4 = new c4( c1, c2, this );
            }
        }

        etc.

    The second version relies less on CommClass, but it's still very messy, as the private member variables increase in number and the constructors grow in length. Each class contains GUI components that need to communicate through CommClass, but I can't think of a good way to do it. If this seems strange or horribly inefficient, please describe some method of communication between classes that will continue to work as the project grows. Also, if this doesn't make any sense to anyone, I'll try to give actual code snippets in the future and think of a better way to ask the question. Thanks.
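
    One common middle ground between the two versions, sketched here in C# (the snippets above are language-ambiguous): a small event aggregator (publish/subscribe) object that every component shares, so the central class needs no per-component register method and no call-throughs, and components never hold references to each other. Purely illustrative:

        using System;
        using System.Collections.Generic;

        // Minimal event aggregator: publishers and subscribers share only
        // message types, never references to one another.
        class EventBus
        {
            private readonly Dictionary<Type, List<Delegate>> handlers =
                new Dictionary<Type, List<Delegate>>();

            public void Subscribe<TMessage>(Action<TMessage> handler)
            {
                if (!handlers.TryGetValue(typeof(TMessage), out var list))
                    handlers[typeof(TMessage)] = list = new List<Delegate>();
                list.Add(handler);
            }

            public void Publish<TMessage>(TMessage message)
            {
                if (handlers.TryGetValue(typeof(TMessage), out var list))
                    foreach (Action<TMessage> handler in list)
                        handler(message);
            }
        }

        // Usage: each GUI component takes the bus as its one constructor
        // argument, regardless of how many other components exist.
        class SelectionChanged { public string Item; }

        class C2
        {
            public C2(EventBus bus)
            {
                bus.Subscribe<SelectionChanged>(m => Console.WriteLine("C2 saw " + m.Item));
            }
        }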

