Search Results

Search found 13869 results on 555 pages for 'memory dump'.

  • Migrating from VisualSVN on windows to linux based svn

    - by Jonathan
    I'd like to migrate my svn repository from my local computer running windows and VisualSVN 2.1.2 to an svn app on webfaction (my Linux hosting solution). Initially I tried dumping the svn: svnadmin dump *path_to_repository* *dumpfile_name* and loading it on the Linux machine svnadmin load *dumpfile_name* I received the following error: svnadmin: Can't open file '*dumpfile_path_and_name*/format': Not a directory I found that on my Windows machine I do have a format folder under the repository. So I copied the entire repository to the Linux machine and tried: svnadmin load *path_to_repository_copy* I received the following error: svnadmin: Expected FS format between '1' and '3'; found format '4' what should I do?
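
    A hedged sketch of the usual migration path (paths and names below are illustrative): svnadmin load expects a repository path and reads the dump from standard input, which explains the ".../format: Not a directory" error, and the "Expected FS format between '1' and '3'; found format '4'" error means the Linux svnadmin is older than the 1.6-era one on Windows, so copying the repository wholesale won't work but dump/load should.

        # on the Windows machine
        svnadmin dump C:\path\to\repo > repo.dump

        # copy repo.dump to the Linux host, then create an empty repo and load into it
        svnadmin create /home/user/svn/repo
        svnadmin load /home/user/svn/repo < repo.dump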

    Read the article

  • va_list crash bug from hell

    - by Jonas Byström
    I'm getting crashes due to memory overwrites (possibly within STLport) running on Darwin. Totally out of ideas, what to do? (Added this question so no-one else would run into the same trap without finding a viable answer.)
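
    Hard to say more without a stack trace, but one classic source of memory overwrites around va_list is traversing the same list twice (or handing it to a callee and then reusing it) without va_copy; a minimal sketch of the safe pattern, not necessarily the bug here:

        #include <stdarg.h>
        #include <stdio.h>

        void log_message(const char *fmt, ...)
        {
            va_list args;
            va_start(args, fmt);

            va_list args_copy;
            va_copy(args_copy, args);                  /* reusing args directly is undefined on many ABIs */
            int needed = vsnprintf(NULL, 0, fmt, args_copy);
            va_end(args_copy);

            if (needed >= 0) {
                char buf[256];
                vsnprintf(buf, sizeof buf, fmt, args); /* second, independent traversal */
                fputs(buf, stderr);
            }
            va_end(args);
        }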

    Read the article

  • Getting Cabal to work with GHC 6.12.1

    - by Dan Dyer
    I've installed the latest GHC package (6.12.1) on OS X, but I can't get Cabal to work. I've removed the version I had previously that worked with GHC 6.10 and tried to re-install from scratch. The latest Cabal version available for download is 1.6.0.2. However, when I try to build this I get the following error: Configuring Cabal-1.6.0.2... Setup: failed to parse output of 'ghc-pkg dump' From what I've found searching, this seems to suggest that the version of Cabal is too old for the version of GHC. Is there any way to get Cabal to work with GHC 6.12.1 yet? EDIT: To be clear, I'm trying to set-up cabal-install.
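
    A hedged sketch (version numbers and URL are illustrative): GHC 6.12 ships with the Cabal 1.8 library, so rather than building Cabal 1.6.0.2, bootstrapping a cabal-install from the 1.8 line with the bootstrap.sh script included in its tarball may be the quicker route:

        curl -O http://hackage.haskell.org/packages/archive/cabal-install/0.8.2/cabal-install-0.8.2.tar.gz
        tar xzf cabal-install-0.8.2.tar.gz
        cd cabal-install-0.8.2
        sh bootstrap.sh              # builds cabal-install and its dependencies against the installed GHC
        ~/.cabal/bin/cabal update    # then put ~/.cabal/bin on PATH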

    Read the article

  • How to use regex to match ASTERISK in awk

    - by Ken Chen
    I'm still pretty new to regular expressions and just started learning to use awk. What I am trying to accomplish is writing a ksh script to read in lines from text, and for every line that matches the following: *RECORD 0000001 [some_serial_#] to replace $2 (i.e. 000001) with a different number. So essentially the script reads in a batch record dump, replaces the record number with date+record#, and writes to a separate file. This is what I'm thinking the format should be: awk 'match($0,"/*FTR")!=0{$2="$DATE-n++"; print $0} match($0,"/*FTR")==0{print $0}' $BATCH > $OUTPUT but obviously "/*FTR" is not going to work, and I'm not sure if changing $2 and then writing the whole line is the correct way to do this. So I am in need of some serious enlightenment.
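
    A sketch of one way to do it (using the *RECORD marker from the description; swap in *FTR if that is the real tag): the asterisk has to be escaped in a regex, or you can sidestep the regex entirely with a literal comparison on the first field, and the date and counter belong in awk variables rather than inside a quoted string:

        awk -v date="$DATE" '
            $1 == "*RECORD" { $2 = date "-" (++n) }   # literal string compare, no regex needed
            { print }                                 # print every line, changed or not
        ' "$BATCH" > "$OUTPUT"

        # regex alternative for the match: /^\*RECORD/ { ... }   (the backslash makes the * literal)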

    Read the article

  • Migrating Ruby Site from EngineYard to Heroku

    - by user410925
    As part of a larger project I've been tasked with migrating some existing Ruby on Rails sites (built with an old version of refinerycms 0.9.6.34, at least that's the version listed in the Gemfile included with the source). I don't normally work with Ruby so I'm at a bit of a loss. The previous developers simply handed over the latest git dump as well as a db dump. I'm working first with trying to get the site up working locally on an Ubuntu 11.10 local machine before pushing up to at test Heroku install. If it's possible to just push directly to Heroku with the files they gave, then I can try that, but it's my understanding I need to get everything working and then use Heroku's tools to deploy. The previous devs said they're using ruby 1.8.7 so in Ubuntu I've done the following: aptitude install ruby1.8 ruby1.8-dev ruby1.8-full aptitude install rubygems1.8 I've restored the database and in the config directory I've made changes to the database.yml to point to the restored database. When I try and run "bundle install" from the root of the extracted source dir I get: Invalid gemspec in [/var/lib/gems/1.8/specifications/mail-2.4.4.gemspec]: invalid date format in specification: "2012-03-14 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/tilt-1.3.3.gemspec]: invalid date format in specification: "2011-08-25 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/mime-types-1.18.gemspec]: invalid date format in specification: "2012-03-21 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/sass-rails-3.2.5.gemspec]: invalid date format in specification: "2012-03-19 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/jquery-rails-2.0.2.gemspec]: invalid date format in specification: "2012-04-03 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/mail-2.4.4.gemspec]: invalid date format in specification: "2012-03-14 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/tilt-1.3.3.gemspec]: invalid date format in specification: "2011-08-25 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/mime-types-1.18.gemspec]: invalid date format in specification: "2012-03-21 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/sass-rails-3.2.5.gemspec]: invalid date format in specification: "2012-03-19 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/jquery-rails-2.0.2.gemspec]: invalid date format in specification: "2012-04-03 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/mail-2.4.4.gemspec]: invalid date format in specification: "2012-03-14 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/tilt-1.3.3.gemspec]: invalid date format in specification: "2011-08-25 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/mime-types-1.18.gemspec]: invalid date format in specification: "2012-03-21 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/sass-rails-3.2.5.gemspec]: invalid date format in specification: "2012-03-19 00:00:00.000000000Z" Invalid gemspec in [/var/lib/gems/1.8/specifications/jquery-rails-2.0.2.gemspec]: invalid date format in specification: "2012-04-03 00:00:00.000000000Z" Fetching gem metadata from https://rubygems.org/....... Fetching gem metadata from https://rubygems.org/.. 
Using rake (0.9.2.2) Using i18n (0.6.0) Using multi_json (1.3.6) Using activesupport (3.2.3) Using builder (3.0.0) Using activemodel (3.2.3) Using erubis (2.7.0) Using journey (1.0.3) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.1) Using hike (1.2.1) Installing tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.3) Installing mime-types (1.18) Using polyglot (0.3.3) Using treetop (1.4.10) Installing mail (2.4.4) Using actionmailer (3.2.3) Using arel (3.0.2) Using tzinfo (0.3.33) Using activerecord (3.2.3) Using activeresource (3.2.3) Using acts_as_indexed (0.7.8) Using awesome_nested_set (2.1.3) Using babosa (0.3.7) Using bcrypt-ruby (3.0.1) Using coffee-script-source (1.3.3) Using execjs (1.4.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.7.3) Using rdoc (3.12) Using thor (0.14.6) Using railties (3.2.3) Using coffee-rails (3.2.2) Using orm_adapter (0.0.7) Using warden (1.1.1) Using devise (2.0.4) Using dragonfly (0.9.12) Using friendly_id (4.0.6) Using paper_trail (2.6.3) Using globalize3 (0.2.0) Installing jquery-rails (2.0.2) Using bundler (1.1.4) Using rails (3.2.3) Using sass (3.1.19) Installing sass-rails (3.2.5) Using truncate_html (0.5.5) Using uglifier (1.2.4) Using will_paginate (3.0.3) Using refinerycms-core (2.0.4) Using refinerycms-authentication (2.0.4) Using refinerycms-dashboard (2.0.4) Using refinerycms-images (2.0.4) Using seo_meta (1.3.0) Using refinerycms-pages (2.0.4) Using refinerycms-resources (2.0.4) Using refinerycms (2.0.4) Using routing-filter (0.3.1) Using refinerycms-i18n (2.0.0) Using sqlite3 (1.3.6) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. Obviously the errors with Invalid gemspec need to be resolved, but the other thing that's troubling to me are the lines: Using refinerycms-core (2.0.4) Using refinerycms-authentication (2.0.4) Using refinerycms-dashboard (2.0.4) Using refinerycms-images (2.0.4) Using seo_meta (1.3.0) Using refinerycms-pages (2.0.4) Using refinerycms-resources (2.0.4) Using refinerycms (2.0.4) Using routing-filter (0.3.1) Using refinerycms-i18n (2.0.0) Since the refinerycms version listed in the Gemfile was 0.9.6.34. When it comes to the Ruby world, I'm a bit lost so any help would be greatly appreciated. Thanks,
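
    A hedged cleanup sketch for the "Invalid gemspec ... invalid date format" warnings, which are a known symptom of gemspecs written by a newer RubyGems than the one reading them; assuming RubyGems on this box can be updated (the Debian/Ubuntu-packaged rubygems may refuse the first command, in which case installing Ruby via rvm or from source sidesteps it):

        gem update --system            # a newer RubyGems parses the offending date format
        gem pristine --all             # restore installed gems from their cached .gem files

    The "Using refinerycms-* (2.0.4)" lines simply show what the handed-over Gemfile/Gemfile.lock actually resolves to, so it is worth confirming with the previous developers which Gemfile matches the database dump before deploying.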

    Read the article

  • Catching a python app before it exits

    - by Leopd
    I have a python app which is supposed to be very long-lived, but sometimes the process just disappears and I don't know why. Nothing gets logged when this happens, so I'm at a bit of a loss. Is there some way in code I can hook in to an exit event, or some other way to get some of my code to run just before the process quits? I'd like to log the state of memory structures to better understand what's going on.
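
    A minimal sketch of the hooks available (whether any of them fire depends on how the process is dying; a SIGKILL or the kernel OOM killer cannot be intercepted, so checking dmesg/syslog is worthwhile too):

        import atexit
        import logging
        import signal
        import sys

        logging.basicConfig(filename="lifecycle.log", level=logging.INFO)

        def log_state(reason):
            # placeholder: dump whatever memory structures matter to you
            logging.info("exiting: %s", reason)

        # normal interpreter shutdown (sys.exit, falling off the end of main, ...)
        atexit.register(log_state, "atexit")

        # uncaught exceptions
        def excepthook(exc_type, exc, tb):
            logging.error("uncaught exception", exc_info=(exc_type, exc, tb))
        sys.excepthook = excepthook

        # polite kill signals; turn them into a normal exit so atexit still runs
        def handle_signal(signum, frame):
            sys.exit("signal %d" % signum)
        for sig in (signal.SIGTERM, signal.SIGINT):
            signal.signal(sig, handle_signal)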

    Read the article

  • NFS issue brings down entire vSphere ESX estate

    - by growse
    I experienced an odd issue this morning where an NFS issue appeared to have taken down the majority of my VMs hosted on a small vSphere 5.0 estate. The infrastructure itself is 4x IBM HS21 blades running around 20 VMs. The storage is provided by a single HP X1600 array with attached D2700 chassis running Solaris 11. There's a couple of storage pools on this which are exposed over NFS for the storage of the VM files, and some iSCSI LUNs for things like MSCS shared disks. Normally, this is pretty stable, but I appreciate the lack of resiliancy in having a single X1600 doing all the storage. This morning, in the logs of each ESX host, at around 0521 GMT I saw a lot of entries like this: 2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4cf9a8 3 2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4dc9e8 3 2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4d3fa8 3 2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4de0a8 3 [....] 2011-11-30T06:16:07.042Z cpu0:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T06:17:01.459Z cpu2:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T06:25:17.887Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000a4c2b28 3 2011-11-30T06:27:16.063Z cpu3:4011)NFSLock: 568: Start accessing fd 0x41000a4d8928 again 2011-11-30T06:35:30.827Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)") 2011-11-30T06:36:37.953Z cpu6:2054)NFS: 292: Restored connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)") 2011-11-30T06:40:08.242Z cpu6:2054)NFSLock: 608: Stop accessing fd 0x41000a4c3e68 3 2011-11-30T06:40:34.647Z cpu3:2051)NFSLock: 568: Start accessing fd 0x41000a4d8928 again 2011-11-30T06:44:42.663Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T06:44:53.973Z cpu0:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T06:51:28.296Z cpu5:2058)NFSLock: 608: Stop accessing fd 0x41000ae3c528 3 2011-11-30T06:51:44.024Z cpu4:2052)NFSLock: 568: Start accessing fd 0x41000ae3b8e8 again 2011-11-30T06:56:30.758Z cpu4:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T06:56:53.389Z cpu7:2055)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank") 2011-11-30T07:01:50.350Z cpu6:2054)ScsiDeviceIO: 2316: Cmd(0x41240072bc80) 0x12, CmdSN 0x9803 to dev "naa.600508e000000000505c16815a36c50d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. 
2011-11-30T07:03:48.449Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000ae46b68 3 2011-11-30T07:03:57.318Z cpu4:4009)NFSLock: 568: Start accessing fd 0x41000ae48228 again (I've put a complete dump from one of the hosts on pastebin: http://pastebin.com/Vn60wgTt) When I got in the office at 9am, I saw various failures and alarms and troubleshooted the issue. It turned out that pretty much all of the VMs were inaccessible, and that the ESX hosts either were describing each VM as 'powered off', 'powered on', or 'unavailable'. The VMs described as 'powered on' where not in any way reachable or responding to pings, so this may be lies. There's absolutely no indication on the X1600 that anything was awry, and nothing on the switches to indicate any loss of connectivity. I only managed to resolve the issue by rebooting the ESX hosts in turn. I have a number of questions: What the hell happened? If this was a temporary NFS failure, why did it put the ESX hosts into a state from which a reboot was the only recovery? In the future, when the NFS server goes a little off-piste, what would be the best approach to add some resilience? I've been looking at budgeting for next year and potentially have budget to purchase another X1600/D2700/disks, would an identical mirrored disk setup help to mitigate these sorts of failures automatically? Edit (Added requested details) To expand with some details as requested: The X1600 has 12x 1TB disks lumped together in mirrored pairs as tank, and the D2700 (connected with a mini SAS cable) has 12x 300GB 10k SAS disks lumped together in mirrored pairs as sastank zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c7t0d0s0 ONLINE 0 0 0 errors: No known data errors pool: sastank state: ONLINE scan: scrub repaired 0 in 74h21m with 0 errors on Wed Nov 30 02:51:58 2011 config: NAME STATE READ WRITE CKSUM sastank ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c7t14d0 ONLINE 0 0 0 c7t15d0 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 c7t16d0 ONLINE 0 0 0 c7t17d0 ONLINE 0 0 0 mirror-2 ONLINE 0 0 0 c7t18d0 ONLINE 0 0 0 c7t19d0 ONLINE 0 0 0 mirror-3 ONLINE 0 0 0 c7t20d0 ONLINE 0 0 0 c7t21d0 ONLINE 0 0 0 mirror-4 ONLINE 0 0 0 c7t22d0 ONLINE 0 0 0 c7t23d0 ONLINE 0 0 0 mirror-5 ONLINE 0 0 0 c7t24d0 ONLINE 0 0 0 c7t25d0 ONLINE 0 0 0 errors: No known data errors pool: tank state: ONLINE scan: scrub repaired 0 in 17h28m with 0 errors on Mon Nov 28 17:58:19 2011 config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c7t1d0 ONLINE 0 0 0 c7t2d0 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 c7t3d0 ONLINE 0 0 0 c7t4d0 ONLINE 0 0 0 mirror-2 ONLINE 0 0 0 c7t5d0 ONLINE 0 0 0 c7t6d0 ONLINE 0 0 0 mirror-3 ONLINE 0 0 0 c7t8d0 ONLINE 0 0 0 c7t9d0 ONLINE 0 0 0 mirror-4 ONLINE 0 0 0 c7t10d0 ONLINE 0 0 0 c7t11d0 ONLINE 0 0 0 mirror-5 ONLINE 0 0 0 c7t12d0 ONLINE 0 0 0 c7t13d0 ONLINE 0 0 0 errors: No known data errors The filesystem exposed over NFS for the primary datastore is sastank/VMStorage zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 45.1G 13.4G 92.5K /rpool rpool/ROOT 2.28G 13.4G 31K legacy rpool/ROOT/solaris 2.28G 13.4G 2.19G / rpool/dump 15.0G 13.4G 15.0G - rpool/export 11.9G 13.4G 32K /export rpool/export/home 11.9G 13.4G 32K /export/home rpool/export/home/andrew 11.9G 13.4G 11.9G /export/home/andrew rpool/swap 15.9G 29.2G 123M - sastank 1.08T 536G 33K /sastank sastank/VMStorage 1.01T 536G 1.01T /sastank/VMStorage sastank/comstar 71.7G 536G 31K /sastank/comstar sastank/comstar/sql_tempdb 6.31G 536G 6.31G - sastank/comstar/sql_tx_data 
65.4G 536G 65.4G - tank 4.79T 578G 42K /tank tank/FTP 269G 578G 269G /tank/FTP tank/ISO 28.8G 578G 25.9G /tank/ISO tank/backupstage 2.64T 578G 2.49T /tank/backupstage tank/cifs 301G 578G 297G /tank/cifs tank/comstar 1.54T 578G 31K /tank/comstar tank/comstar/msdtc 1.07G 579G 32.8M - tank/comstar/quorum 577M 578G 47.9M - tank/comstar/sqldata 1.54T 886G 304G - tank/comstar/vsphere_lun 2.09G 580G 22.2M - tank/mcs-asset-repository 7.01M 578G 6.99M /tank/mcs-asset-repository tank/mscs-quorum 55K 578G 36K /tank/mscs-quorum tank/sccm 16.1G 578G 12.8G /tank/sccm As for the networking, all connections between the X1600, the Blades and the switch are either LACP or Etherchannel bonded 2x 1Gbit links. Switch is a single Cisco 3750. Storage traffic sits on its own VLAN segregated from VM machine traffic.
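
    On the resilience question, a hedged sketch besides buying a second head: the ESX-side NFS heartbeat settings control how long a host tolerates an unresponsive NFS server before declaring the datastore dead, and the values below are the ones commonly quoted from VMware's NFS best-practice guidance (worth verifying against the current KB for 5.0 before applying, and they need to match on every host):

        # run on each ESXi 5.0 host
        esxcli system settings advanced set -o /NFS/HeartbeatFrequency   -i 12
        esxcli system settings advanced set -o /NFS/HeartbeatTimeout     -i 5
        esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10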

    Read the article

  • How can I prevent PermGen space errors in Netbeans?

    - by DR
    Every 15-30 minutes NetBeans shows a "java.lang.OutOfMemoryError: PermGen space". From what I learned from Google, this seems to be related to classloader leaks or memory leaks in general. Unfortunately, all the suggestions I found were related to application servers, and I have no idea how to adapt them to NetBeans. (I'm not even sure it's the same problem.) Is it a problem in my application? How can I find the source?
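
    If it turns out not to be a leak in your own application, a sketch of the usual NetBeans-side mitigation is raising the PermGen cap in <netbeans-install>/etc/netbeans.conf (the -J prefix passes the flag through to the IDE's JVM); keep the existing flags on the line and just append to them, e.g.:

        netbeans_default_options="... existing options ... -J-Xmx1024m -J-XX:MaxPermSize=384m"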

    Read the article

  • disable javascript in a UIWebView

    - by adrien Coye
    Hi, Is there a way to disable JavaScript in a UIWebView? I have a memory consumption problem, and I'm looking for a trick here because it doesn't seem to be possible with the official API. Maybe a JavaScript call can stop JavaScript itself, I don't know. Thanks in advance for any help, Adrian C

    Read the article

  • Unloading vertex buffers in OpenGL

    - by Jeremy Statz
    I have an Android live wallpaper that I suspect is leaking memory, probably either textures or vertex arrays. I'm calling glDeleteTextures on my texture IDs, but don't see any sort of equivalent for my vertex buffers. I'd like to be sure both my textures and buffers are getting unloaded by OpenGL; am I missing something? The documentation I've found seems to suggest OpenGL just works it out on its own, but that's not giving me a lot of comfort.
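
    For buffer objects created with glGenBuffers there is a direct counterpart to glDeleteTextures; a sketch, assuming GL ES 2.0 on Android (IDs and class name below are hypothetical):

        import android.opengl.GLES20;

        public class GlCleanup {
            // release vertex/index buffer objects previously created with glGenBuffers;
            // must be called while the GL context that owns them is current
            public static void deleteBuffers(int vertexBufferId, int indexBufferId) {
                int[] bufferIds = { vertexBufferId, indexBufferId };
                GLES20.glDeleteBuffers(bufferIds.length, bufferIds, 0);
            }
        }

    Client-side vertex arrays (plain java.nio buffers never handed to glGenBuffers) are ordinary Java objects, so dropping the references is enough for the garbage collector; only driver-side objects (textures, buffer objects, shaders, programs) need explicit glDelete* calls.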

    Read the article

  • Why is SpringSource Tool Suite (STS) so slow? And how can I fix it?

    - by colbeerhey
    I've been running STS 2.3.2 on a MacBook Pro for a few days now. I'm finding the performance to be significantly slower than any other build of Eclipse I've used. For example, switching from one tab to another can take up to 4 seconds. I tried turning off much of the validation, and increasing the memory, but it's not making a difference. Are others having similar experiences?
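
    A hedged sketch of the usual first step: raise the heap and PermGen for the IDE in STS.ini (on OS X it typically lives inside the app bundle, e.g. STS.app/Contents/MacOS/STS.ini; the exact path varies by release). Everything after the -vmargs line is passed to the JVM:

        -vmargs
        -Xms256m
        -Xmx1024m
        -XX:MaxPermSize=512m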

    Read the article

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get a HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor: Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); As I understand (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap, it simply points the managed bitmap to the pixel data from the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the the Form BackgroundImage property. And it works, the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap. The Question: Can you tell me if this reasoning and code seems correct. I hope I will not get some unexpected behaviors or errors. And I hope I'm freeing all the memory and objects correctly. private void Example() { IntPtr nativeHBitmap = IntPtr.Zero; /* Get the native HBitmap object from a Windows function here */ // Create the BITMAP structure and get info from our nativeHBitmap NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP(); NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct); // Create the managed bitmap using the pointer to the pixel data of the native HBitmap Bitmap managedBitmap = new Bitmap( bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); // Show the bitmap this.BackgroundImage = managedBitmap; /* Run the program, use the image */ MessageBox.Show("running..."); // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap this.BackgroundImage = null; managedBitmap.Dispose(); NativeMethods.DeleteObject(nativeHBitmap); } internal static class NativeMethods { [StructLayout(LayoutKind.Sequential)] public struct BITMAP { public int bmType; public int bmWidth; public int bmHeight; public int bmWidthBytes; public ushort bmPlanes; public ushort bmBitsPixel; public IntPtr bmBits; } [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")] public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject); [DllImport("gdi32.dll")] internal static extern bool DeleteObject(IntPtr hObject); }

    Read the article

  • Create certificate for a client app in .NET

    - by galets
    I'm looking for a server app to routinely generate certificates for client applications using a self-signed root. Is there any streamlined process in .NET to programmatically generate those certificates? I can, of course, keep spawning makecert or openssl, but I was looking for a more programmatic, in-memory method, where you just get an X509Certificate on output. If someone has a code snippet, can you please share?
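
    A minimal sketch, assuming a runtime where System.Security.Cryptography.X509Certificates.CertificateRequest is available (roughly .NET Framework 4.7.2+ or .NET Core 2.0+); on older frameworks the usual routes were the CertEnroll COM API or BouncyCastle. The root would be created once with CreateSelfSigned (including a CA basic-constraints extension) and then used to sign client certificates entirely in memory:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;

        static class CertFactory
        {
            // rootWithPrivateKey: the self-signed CA certificate, loaded together with its private key
            public static X509Certificate2 IssueClientCert(X509Certificate2 rootWithPrivateKey, string clientName)
            {
                RSA clientKey = RSA.Create(2048);             // key lifetime/persistence omitted in this sketch
                var req = new CertificateRequest(
                    "CN=" + clientName, clientKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

                byte[] serial = Guid.NewGuid().ToByteArray(); // illustrative serial-number scheme
                X509Certificate2 signed = req.Create(
                    rootWithPrivateKey, DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(1), serial);

                // re-attach the client's private key so the caller gets a usable in-memory certificate
                return signed.CopyWithPrivateKey(clientKey);
            }
        }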

    Read the article

  • How can I find out if an instance is a MarshalByRef proxy?

    - by Will
    I know there's a way, I know I've done it (a long time) before, but I can't remember or find out how to do it!!! var otherDomain = AppDomain.CreateDomain("Lol my memory sucks"); var myRemotableType = typeof(MyTypeThatExtendsMBRO); var proxy = otherDomain.CreateInstanceAndUnwrap( myRemotableType.Assembly.FullName, myRemotableType.FullName); // how do you do this next step??? bool isProxy = IsYouIsOrIsYouAintAProxy(proxy);
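
    A sketch of what is probably the half-remembered call: the remoting infrastructure can tell you directly whether a reference is a transparent proxy.

        using System.Runtime.Remoting;

        bool isProxy = RemotingServices.IsTransparentProxy(proxy);
        // RemotingServices.GetRealProxy(proxy) hands back the RealProxy behind it if more detail is needed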

    Read the article

  • Anyone using Moles / Pex in production?

    - by dferraro
    Hi all, I did search the forum and did not find a similar question. I'm looking to make a final decision on our mocking framework of choice moving forward as a best practice - I had decided on Moq... until I recently discovered MS has finally created a mocking framework called Moles, which seems to work similarly to TypeMock via the profiler API sexiness etc. There are a million 'NMock vs Moq vs TypeMock vs Rhino...' threads on here, but I never see Moles involved. In fact, I did not even know of its existence until a short time ago. Anyone using it? In production? Anyone dump their old mocking framework for it, and if so, which one? How did it compare to other mocking frameworks you've used? Thanks. PS: we are using VS2008 and are moving to 2010 shortly.

    Read the article

  • C#: IComparable implementation private

    - by Anonymous Coward
    Hello, I'm new to C# so this might be a really dumb question: I implemented IComparable in my class and want to test it with NUnit. But the CompareTo method is marked as private and thus not accessible from the test. What's the reason for this and how can I fix it? The IComparable: public class PersonHistoryItem : DateEntity, IComparable { ... int IComparable.CompareTo(object obj) { PersonHistoryItem phi = (PersonHistoryItem)obj; return this.StartDate.CompareTo(phi.StartDate); } } The test: [TestMethod] public void TestPersonHistoryItem() { DateTime startDate = new DateTime(2001, 2, 2); DateTime endDate = new DateTime(2010, 2, 2); PersonHistoryItem phi1 = new PersonHistoryItem(startDate, endDate); PersonHistoryItem phi2 = new PersonHistoryItem(startDate, endDate); Assert.IsTrue(phi1.CompareTo(phi2)==0); }
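
    The method isn't private so much as an explicit interface implementation, which is only reachable through the interface type; two hedged ways around it in a sketch:

        // Option 1: keep the explicit implementation and cast in the test
        Assert.IsTrue(((IComparable)phi1).CompareTo(phi2) == 0);

        // Option 2: implement the interface implicitly so CompareTo is public on the class
        public class PersonHistoryItem : DateEntity, IComparable
        {
            public int CompareTo(object obj)
            {
                PersonHistoryItem phi = (PersonHistoryItem)obj;
                return this.StartDate.CompareTo(phi.StartDate);
            }
        }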

    Read the article

  • who free's setvbuf buffer?

    - by Evan Teran
    So I've been digging into how the stdio portion of libc is implemented and I've come across another question. Looking at man setvbuf I see the following: When the first I/O operation occurs on a file, malloc(3) is called, and a buffer is obtained. This makes sense, your program should have a malloc in it for I/O unless you actually use it. My gut reaction to this is that libc will clean up its own mess here. Which I can only assume it does because valgrind reports no memory leaks (they could of course do something dirty and not allocate it via malloc directly... but we'll assume that it literally uses malloc for now). But, you can specify your own buffer too... int main() { char *p = malloc(100); setvbuf(stdio, p, _IOFBF, 100); puts("hello world"); } Oh no, memory leak! valgrind confirms it. So it seems that whenever stdio allocates a buffer on its own, it will get deleted automatically (at the latest on program exit, but perhaps on stream close). But if you specify the buffer explicitly, then you must clean it up yourself. There is a catch though. The man page also says this: You must make sure that the space that buf points to still exists by the time stream is closed, which also happens at program termination. For example, the following is invalid: Now this is getting interesting for the standard streams. How would one properly clean up a manually allocated buffer for them, since they are closed in program termination? I could imagine a "clean this up when I close flag" inside the file struct, but it get hairy because if I read this right doing something like this: setvbuf(stdio, 0, _IOFBF, 100); printf("hello "); setvbuf(stdio, 0, _IOLBF, 100); printf("world\n"); would cause 2 allocations by the standard library because of this sentence: If the argument buf is NULL, only the mode is affected; a new buffer will be allocated on the next read or write operation.
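
    A sketch of the two workable patterns for a caller-supplied buffer: give it storage that trivially outlives the stream (static or global), or free it only after you have explicitly closed the stream yourself. For the standard streams, which are closed at program termination, the static-buffer form sidesteps the problem entirely:

        #include <stdio.h>

        static char out_buf[BUFSIZ];            /* outlives stdout, nothing to free */

        int main(void)
        {
            if (setvbuf(stdout, out_buf, _IOFBF, sizeof out_buf) != 0)
                return 1;
            puts("hello world");
            return 0;                           /* stdout flushed/closed at exit; out_buf still valid */
        }

        /* malloc'd buffer, but only for a stream you close yourself:
         *     char *p = malloc(BUFSIZ);
         *     setvbuf(fp, p, _IOFBF, BUFSIZ);
         *     ...
         *     fclose(fp);
         *     free(p);
         */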

    Read the article

  • MySQL unique clustered constraint not constraining as expected

    - by igor
    I'm creating a table with: CREATE TABLE movies ( id INT AUTO_INCREMENT PRIMARY KEY, name CHAR(255) NOT NULL, year INT NOT NULL, inyear CHAR(10), CONSTRAINT UNIQUE CLUSTERED (name, year, inyear) ); (this is jdbc SQL) Which creates a MySQL table with a clustered index, "index kind" is "unique", spanning the three clustered columns. However, once I dump my data (without exceptions thrown), I see that the uniqueness constraint has failed: SELECT * FROM movies WHERE name = 'Flawless' AND year = 2007 AND inyear IS NULL; gives: id, name, year, inyear 162169, 'Flawless', 2007, NULL 162170, 'Flawless', 2007, NULL Does anyone know what I'm doing wrong here?
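
    Nothing is wrong with the constraint itself: in MySQL a UNIQUE index treats NULLs as distinct, so any number of rows with inyear NULL can share the same name and year. A hedged sketch of the usual workaround, using a sentinel instead of NULL so the key actually bites (pick a sentinel that can never be a real value):

        CREATE TABLE movies (
            id     INT AUTO_INCREMENT PRIMARY KEY,
            name   CHAR(255) NOT NULL,
            year   INT NOT NULL,
            inyear CHAR(10) NOT NULL DEFAULT '',
            UNIQUE KEY uq_movie (name, year, inyear)
        );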

    Read the article

  • Ruby zlib deflate massive data

    - by Bub Bradlee
    I'm trying to use Zlib::Deflate.deflate on a massive file (4 gigs). There are obvious problems with doing that, the first of which is that I can't load the entire file into memory all at once. Zlib::GzipWriter would work, since it works with streams, but it's not zlib compression. Any ideas?
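
    Zlib::Deflate can be fed incrementally, so the file never has to be in memory all at once; a minimal sketch (filenames and chunk size are illustrative):

        require 'zlib'

        CHUNK = 16 * 1024 * 1024                       # 16 MB at a time
        z = Zlib::Deflate.new
        File.open('huge.bin', 'rb') do |src|
          File.open('huge.bin.z', 'wb') do |dst|
            while (chunk = src.read(CHUNK))
              dst << z.deflate(chunk, Zlib::NO_FLUSH)  # compress this chunk, buffer the rest
            end
            dst << z.finish                            # flush whatever zlib is still holding
          end
        end
        z.close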

    Read the article
