Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.


  • When is onPostExecute called on AsyncTasks running in parallel or concurrently?

    - by Debarshi Dutta
    I am using Android Honeycomb. I need to execute some tasks in parallel, so I am using AsyncTask's public final AsyncTask executeOnExecutor(Executor exec, Params... params) method. In each separate thread I compute some values, and I need to store them in an ArrayList. I must then sort all the values in the ArrayList and display them in the UI. Now my question is: if one of the threads completes earlier than the others, will it immediately call its onPostExecute method, or will onPostExecute be called only after all the background threads have completed? My program's implementation depends on what happens here.
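
    Each AsyncTask instance fires its own onPostExecute on the UI thread as soon as that instance's doInBackground returns; it does not wait for its sibling tasks. So to sort and display the combined list only once, the completions have to be counted. A minimal sketch, assuming API level 11+; the task class, computeValue and displayInUi are illustrative, not from the post:

        import android.os.AsyncTask;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.atomic.AtomicInteger;

        public class ValueTask extends AsyncTask<Integer, Void, Integer> {
            private static final List<Integer> results =
                    Collections.synchronizedList(new ArrayList<Integer>());
            private static final AtomicInteger remaining = new AtomicInteger();

            public static void launch(int taskCount) {
                remaining.set(taskCount);
                for (int i = 0; i < taskCount; i++) {
                    // Runs the tasks concurrently instead of serially.
                    new ValueTask().executeOnExecutor(THREAD_POOL_EXECUTOR, i);
                }
            }

            @Override
            protected Integer doInBackground(Integer... params) {
                return computeValue(params[0]); // per-thread computation (stub)
            }

            @Override
            protected void onPostExecute(Integer value) {
                // Called on the UI thread once per task, as each one finishes.
                results.add(value);
                if (remaining.decrementAndGet() == 0) {
                    // Only the last task to finish sorts and displays the list.
                    Collections.sort(results);
                    displayInUi(results);
                }
            }

            private static int computeValue(int seed) { return seed * seed; }
            private static void displayInUi(List<Integer> sorted) { /* update views */ }
        }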


  • Out of memory error

    - by Rahul Varma
    Hi, I am trying to retrieve a list of images and text from a web service. I first wrote code to load the images into a list using a SimpleAdapter. The images get displayed, but then the app throws an error, and the following shows up in LogCat:

        04-26 10:55:39.483: ERROR/dalvikvm-heap(1047): 8850-byte external allocation too large for this process.
        04-26 10:55:39.493: ERROR/(1047): VM won't let us allocate 8850 bytes
        04-26 10:55:39.563: ERROR/AndroidRuntime(1047): Uncaught handler: thread Thread-96 exiting due to uncaught exception
        04-26 10:55:39.573: ERROR/AndroidRuntime(1047): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        04-26 10:55:39.573: ERROR/AndroidRuntime(1047):     at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
        04-26 10:55:39.573: ERROR/AndroidRuntime(1047):     at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:451)
        04-26 10:55:39.573: ERROR/AndroidRuntime(1047):     at com.stellent.gorinka.AsyncImageLoaderv.loadImageFromUrl(AsyncImageLoaderv.java:57)
        04-26 10:55:39.573: ERROR/AndroidRuntime(1047):     at com.stellent.gorinka.AsyncImageLoaderv$2.run(AsyncImageLoaderv.java:41)
        04-26 10:55:40.393: ERROR/dalvikvm-heap(1047): 14600-byte external allocation too large for this process.
        04-26 10:55:40.403: ERROR/(1047): VM won't let us allocate 14600 bytes
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047): Uncaught handler: thread Thread-93 exiting due to uncaught exception
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047):     at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047):     at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:451)
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047):     at com.stellent.gorinka.AsyncImageLoaderv.loadImageFromUrl(AsyncImageLoaderv.java:57)
        04-26 10:55:40.493: ERROR/AndroidRuntime(1047):     at com.stellent.gorinka.AsyncImageLoaderv$2.run(AsyncImageLoaderv.java:41)
        04-26 10:55:40.594: INFO/Process(584): Sending signal. PID: 1047 SIG: 3

    Here's the code in the adapter:

        final ImageView imageView = (ImageView) rowView.findViewById(R.id.image);
        AsyncImageLoaderv asyncImageLoader = new AsyncImageLoaderv();
        Bitmap cachedImage = asyncImageLoader.loadDrawable(imgPath, new AsyncImageLoaderv.ImageCallback() {
            public void imageLoaded(Bitmap imageDrawable, String imageUrl) {
                imageView.setImageBitmap(imageDrawable);
            }
        });
        imageView.setImageBitmap(cachedImage);

    And the method that loads the image:

        public static Bitmap loadImageFromUrl(String url) {
            InputStream inputStream;
            Bitmap b;
            try {
                inputStream = (InputStream) new URL(url).getContent();
                BitmapFactory.Options bpo = new BitmapFactory.Options();
                bpo.inSampleSize = 2;
                b = BitmapFactory.decodeStream(inputStream, null, bpo);
                return b;
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            // return null;
        }

    Please tell me how to fix the error.
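
    The usual way to keep decoded bitmaps inside the VM budget is to downsample each image to the size it will actually be displayed at, rather than using a fixed inSampleSize of 2. A sketch of that two-pass pattern, assuming a known target size; this is a general technique, not code from the post (the URL cannot be rewound, so it is opened twice):

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;

        public static Bitmap loadScaledImage(String url, int reqWidth, int reqHeight)
                throws IOException {
            // First pass: read only the image dimensions (no pixel allocation).
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inJustDecodeBounds = true;
            InputStream in = new URL(url).openStream();
            try {
                BitmapFactory.decodeStream(in, null, opts);
            } finally {
                in.close();
            }
            // Largest power-of-two sample size that still covers the target.
            int sample = 1;
            while (opts.outWidth / (sample * 2) >= reqWidth
                    && opts.outHeight / (sample * 2) >= reqHeight) {
                sample *= 2;
            }
            // Second pass: decode for real at the reduced size.
            opts = new BitmapFactory.Options();
            opts.inSampleSize = sample;
            in = new URL(url).openStream();
            try {
                return BitmapFactory.decodeStream(in, null, opts);
            } finally {
                in.close();
            }
        }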


  • ASP.NET Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet, and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load-balancing. There are three options for setting up SQL Server session storage: tempdb, the default separate DB, and a named DB. Both the tempdb and the default-separate-DB options create a new database to store certain information in; with tempdb, the actual session data lives in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new database. Since you've got to create the new DB either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!
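
    For reference, the storage type is chosen when provisioning the database with aspnet_regsql.exe (-sstype t for tempdb, p for the default persisted ASPState DB, c for a custom named DB) and then pointed at from web.config. A minimal sketch; the server name is illustrative:

        aspnet_regsql.exe -S DBSERVER -E -ssadd -sstype p

        <!-- web.config -->
        <system.web>
          <sessionState mode="SQLServer"
                        sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI"
                        timeout="20" />
        </system.web>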


  • Components are no longer resizable after moving

    - by Junior Software Developer
    Hi guys, my question relates to Swing programming. I want to enlarge a component (component x) by removing it from its parent panel (component a) and adding it to one of component a's parents (component b). Before that, I call setVisible(false) on all components in b. Afterwards I want to undo this by removing the component from b and adding it back to a. After that, none of the components are resizable any more. Why is that? An easy example:

        import java.awt.BorderLayout;
        import java.awt.Color;
        import java.awt.Component;
        import java.awt.Dimension;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JTabbedPane;

        public class SwingTest {
            private static ViewPanel layer1;
            private static JFrame frame;
            private static JTabbedPane tabbedPane;
            private static ViewPanel root;

            public static void main(String[] args) {
                frame = new JFrame();
                frame.setLayout(new BorderLayout());
                frame.setMinimumSize(new Dimension(800, 600));
                root = new ViewPanel();
                root.setBackground(Color.blue);
                root.setPreferredSize(new Dimension(400, 600));
                root.setLayout(new BorderLayout());
                root.add(new JLabel("blue area"));
                layer1 = new ViewPanel();
                layer1.setBackground(Color.red);
                layer1.setPreferredSize(new Dimension(400, 600));
                layer1.setLayout(new BorderLayout());
                tabbedPane = new JTabbedPane();
                tabbedPane.add("A", new JLabel("A label"));
                tabbedPane.setPreferredSize(new Dimension(400, 600));
                layer1.add(tabbedPane);
                root.add(layer1);
                frame.add(root, BorderLayout.NORTH);
                frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
                Thread t = new Thread() {
                    @Override
                    public void run() {
                        try {
                            Thread.sleep(8000);
                            System.out.println("start");
                            for (Component c : root.getComponents()) {
                                c.setVisible(false);
                            }
                            layer1.remove(tabbedPane);
                            root.add(tabbedPane);
                            Thread.sleep(8000);
                            root.remove(tabbedPane);
                            layer1.add(tabbedPane);
                            for (Component c : root.getComponents()) {
                                c.setVisible(true);
                                c.repaint();
                            }
                        } catch (InterruptedException e) {
                            // ...
                        }
                    }
                };
                t.start();
            }
        }
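
    Two things commonly bite in code like this, and a hedged guess is that they explain the symptom: Swing components must be mutated on the Event Dispatch Thread rather than a plain background thread, and a container whose children changed needs revalidate() before repaint() so layout runs again. A sketch of the move-back step rewritten that way (assuming ViewPanel extends JPanel):

        // Perform the hierarchy change on the EDT and re-run layout.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                root.remove(tabbedPane);
                layer1.add(tabbedPane);
                for (Component c : root.getComponents()) {
                    c.setVisible(true);
                }
                layer1.revalidate();  // recompute layout after add/remove
                layer1.repaint();
                root.revalidate();
                root.repaint();
            }
        });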


  • Will a larger hard drive affect performance?

    - by user273010
    My laptop came with a 500 GB hard drive. I use my laptop for storing my digital photographs, and only have about 14 GB of file storage left on the original hard drive. I have a 750 GB external hard drive, but am leery of relying on it for primary storage, as I tend to knock things over and it has already crashed once, losing a lot of my files. I am looking at a 1 TB internal hard drive, but am concerned whether storing so much data will affect the computer's performance. Should I also increase RAM from 4 to 8 GB (the limit for my 64-bit, Windows 7, Asus A54C laptop)?


  • Non-blocking MySQL updates with Java?

    - by justkevin
    For a multiplayer game I'm working on, I'd like to record events to the MySQL database without blocking the game update thread, so that if the database is busy or a table is locked, the game doesn't stop running while it waits for a write. What's the best way to accomplish this? I'm using c3p0 to manage the database connection pool. My best idea so far is to add update-query strings to a synchronized list, with an independent thread checking the list every 100 ms and executing the queries it finds there.
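
    A polled synchronized list works, but a BlockingQueue drained by one dedicated writer thread avoids both the 100 ms polling delay and the shared lock: the game thread only ever enqueues and returns immediately. A minimal sketch, assuming a c3p0 DataSource is already configured; class and method names are illustrative:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import javax.sql.DataSource;

        public class AsyncEventWriter {
            private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
            private final DataSource pool;

            public AsyncEventWriter(DataSource pool) {
                this.pool = pool;
                Thread worker = new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                String sql = queue.take();  // blocks only this thread
                                execute(sql);
                            }
                        } catch (InterruptedException ignored) { }
                    }
                }, "db-writer");
                worker.setDaemon(true);
                worker.start();
            }

            /** Called from the game update thread; returns immediately. */
            public void record(String sql) {
                queue.offer(sql);
            }

            private void execute(String sql) {
                try {
                    Connection c = pool.getConnection();
                    try {
                        PreparedStatement ps = c.prepareStatement(sql);
                        try { ps.executeUpdate(); } finally { ps.close(); }
                    } finally {
                        c.close();  // returns the connection to the c3p0 pool
                    }
                } catch (Exception e) {
                    e.printStackTrace();  // don't let a DB hiccup kill the writer
                }
            }
        }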


  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

        Backbone.js frontend
        Rails 3.2
        PostgreSQL
        Resque + S3 for storage

    The flow of the app is as follows:

        1) Request from frontend; upload a video.
        2) Store the video.
        3) Query external APIs.
        4) Process / encode the video.
        5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

        A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
        B) Backend server (BS), main database. Gets request from 1), posts to 2), saves uploads to 3).
        C) S3 storage.
        D) Server for querying external APIs. Basically just Resque workers that post info back to 2).
        E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

        A) frontend
              \
               \
        B) MAIN_APP/DB ----- C) S3 Storage (Files)
             /    \
            /      \
        D) ExternalAPI_queries    E) Video_Processing
           (redundant DB)            (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reason for this split is that the video-processing part is by far the most resource-intensive, and there I would run a barebones application that just accepts requests and starts processing them. Questions:

        1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everything connects to (and if so, how)?
        2) Is it a good idea to separate the API queries from the video-processing part? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.
        3) What should I use to distribute calls between the backend apps based on load?
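
    Regarding question 3, one common answer is a reverse proxy that balances on current load in front of identical backend instances; a minimal nginx sketch (addresses and ports are illustrative, and HAProxy or an EC2 Elastic Load Balancer would fill the same role):

        # Send each request to the backend with the fewest active connections.
        upstream video_backend {
            least_conn;
            server 10.0.0.11:3000;
            server 10.0.0.12:3000;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://video_backend;
            }
        }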


  • Start a ZFS RAIDZ zpool with two discs then add a third?

    - by Doug S.
    Let's say I have two 2TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two discs, giving me 2TB of usable storage (if I understand it right), and then later add another 2TB HDD, bringing the total to 4TB of usable storage? Am I correct, or does there need to be three HDDs to start with? The reason I ask is that I already have one 2TB drive in use that's full of files. I want to transition to a zpool, but I'd rather only buy two more 2TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID 5 (with some major differences, I know, but in terms of capacity). However, RAID 5 requires 3+ drives, and I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding the fourth, but if I could start with two and move to three, that would save me $80.
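
    One constraint worth knowing before buying: disks cannot be added to an existing RAIDZ vdev, so a two-disc RAIDZ could not later be widened to three; a pool grows by adding whole vdevs. A sketch of the usual two-disc starting point (device names are illustrative):

        # Two-way mirror: ~2TB usable from two 2TB discs, survives one failure.
        zpool create tank mirror /dev/ada1 /dev/ada2

        # Later, capacity is added as a second whole vdev (e.g. another
        # mirror pair), not by widening the first one:
        zpool add tank mirror /dev/ada3 /dev/ada4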


  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my five 1TB drives for two RAID volumes: a 1TB RAID 0 striped across all 5 drives, and a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for the OS, applications and games, while the RAID 5 I use as more-permanent storage (photos, etc). Now, I do realize that running the OS and applications on a RAID 0 across 5 drives is very dangerous, which is what brings up the following question: is there a reliable freeware realtime backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

        - Low CPU and RAM usage.
        - Realtime backups: as soon as a file is modified in the source folder, it is added to a backup queue which is processed with the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
        - Incremental backups: if only 10 bytes changed in a 1GB file, the app should copy only those 10 new bytes.
        - Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites).
        - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

        - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
        - Preset source folders (such as browser favorites, game saves, application settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent, RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.


  • PHP include path problem: same code works on Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS, it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting the include path to different things, but it just doesn't work. The file is in a directory called "language" in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
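
    The error message shows the require_once() resolving storage.inc next to class_lang.inc rather than under language/, i.e. against the server's default include_path. One general workaround (a sketch, not a confirmed fix for this codebase) is to anchor the include path once, early in the request:

        <?php
        // Prepend this file's directory so "language/..." style includes
        // resolve the same way regardless of the server's include_path.
        set_include_path(dirname(__FILE__) . PATH_SEPARATOR . get_include_path());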


  • iMac boot from Linux partition on external drive

    - by user74757
    I have the following setup:

        iMac (no internal drive/dead) -----(FireWire)----- [[MAC OS X]]
                     |
                     |
                   (USB)
                     |
                     |
        [[MISC STORAGE PARTITION][MISC STORAGE PARTITION][EXT2 UBUNTU PARTITION]]

    I routinely use the FireWire drive to boot Mac OS X. However, I would like to boot from the Linux partition on the USB drive. This Linux partition had Linux installed on it from a live CD, and during that process I told the installer to install GRUB on the USB drive (which happened to be /dev/sdd). My question is: how do I get this disk to show up during the iMac's option-boot? Currently, only the FireWire Mac OS X option shows up. I have read about rEFIt, but that appears to install to the Mac OS X disk (would that still work?)... Also mentioned was installing rEFIt to the internal EFI system partition, but I don't know if that is wise.


  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
        Type of backup set:    Time:                       Num volumes:
        Full                   Tue Jun 21 11:27:26 2011    1
        Incremental            Tue Jun 21 11:27:59 2011    1

    If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
            file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set:

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
            ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second option only because I don't want to have to do the conversion manually.
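
    One detail worth checking (an assumption based on duplicity's documented time formats, which show the w3 datetime style with a zone, e.g. 2002-01-25T07:00:00+02:00): the ISO form may need an explicit UTC offset to be parsed as intended. A sketch with the offset spelled out (the offset itself is illustrative; use the server's own zone):

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26-04:00 \
            --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt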


  • How to specify pessimistic lock with Criteria API?

    - by Reddy
    I am retrieving a list of objects in Hibernate using the Criteria API. However, I need a lock on those objects, because another thread executing at the same time will fetch the exact same objects, and in the absence of a pessimistic lock only one of the threads should succeed. I tried the code below, but it is not working:

        List esns = session.createCriteria(Reddy_Pool.class)
            .add(Restrictions.eq("status", "AVAILABLE"))
            .add(Restrictions.eq("name", "REDDY2"))
            .addOrder(Order.asc("id"))
            .setMaxResults(n)
            .setLockMode(LockMode.PESSIMISTIC_WRITE) // not working at all
            .list();
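
    One variant that is often suggested (untested here; behaviour can also depend on the Hibernate version and on how setMaxResults is rendered by the SQL dialect) is to attach the lock mode to the root entity's alias:

        List esns = session.createCriteria(Reddy_Pool.class, "pool")
            .add(Restrictions.eq("pool.status", "AVAILABLE"))
            .add(Restrictions.eq("pool.name", "REDDY2"))
            .addOrder(Order.asc("pool.id"))
            .setMaxResults(n)
            .setLockMode("pool", LockMode.PESSIMISTIC_WRITE) // lock the root alias
            .list();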


  • Is single float assignment an atomic operation on the iPhone?

    - by iter
    I assume that on a 32-bit device like the iPhone, assigning a single float is an atomic, thread-safe operation. I want to make sure it is. I have a C function that I want to call from an Objective-C thread, and I don't want to acquire a lock before calling it:

        void setFloatValue(float value) {
            globalFloat = value;
        }
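
    A caveat that applies generally (a general note, not iPhone-specific guidance from the post): an aligned 32-bit store will not tear on ARM, but atomicity alone says nothing about when other threads observe the new value, or in what order relative to other writes. At minimum the global should be volatile, with a barrier if ordering matters; a sketch:

        #include <libkern/OSAtomic.h>

        static volatile float globalFloat;   /* volatile: no compiler caching */

        void setFloatValue(float value) {
            globalFloat = value;   /* aligned 32-bit store: no torn write */
            OSMemoryBarrier();     /* make the store visible before later writes */
        }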


  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want the FTP users to be able to log in using their Active Directory username/password and to have full access to /media/storage/ftp/username. I set up PPTP using winbind and it is working fine, so I believe the issue is with vsftpd and PAM. The FTP server runs but returns 530 for the login. I turned on debug for the PAM module, but I see nothing in the syslog; vsftpd only logs a failed login in its own log.

    /etc/pam.d/vsftpd:

        auth required pam_winbind.so debug

    /etc/vsftpd.conf:

        listen=YES
        listen_ipv6=NO
        connect_from_port_20=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        xferlog_enable=YES
        idle_session_timeout=600
        data_connection_timeout=120
        nopriv_user=ftp
        ftpd_banner=Welcome to Scantiva! Authorized access only!
        local_umask=022
        local_root=/media/storage/ftp/$USER
        user_sub_token=$USER
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
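
    A hedged observation: a PAM service file that defines only an auth line can pass authentication yet still fail the login at the account stage, which vsftpd also checks. If winbind should handle both stages, /etc/pam.d/vsftpd would look something like this (an assumption to test, not a confirmed fix):

        auth     required  pam_winbind.so debug
        account  required  pam_winbind.so debug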


  • Best Solution For My Requirements

    - by Eray
    Hello, I'm a web developer. I have a few small online web applications and a few Wordpress blogs, but I don't have much experience with installing and configuring web servers. One of my web applications needs cron jobs: it will check the availability of a lot of web sites, and it will eat a lot of RAM. I think shared hosting isn't suitable for this, but 1GB of storage is enough; I don't need much storage for my web sites. What do you think? Which hosting solution is more suitable for my requirements? Reseller? VPS? Cloud server? etc...


  • Problem with USB drivers (Windows XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port Cardbus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB...," and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (w/drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that Cardbus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB Mass Storage Device still in the first set. Any help appreciated. (Windows XP Pro SP3 w/all current updates.)


  • WF State Persistence Collision.

    - by jlafay
    How does a WF service handle a possible collision between the WF runtime persisting/resuming state and a client call invoking the next service method/activity in the workflow? I'm new to WF and I'm developing a back-end service that will be put into production for internal use at work. How does WF handle such a scenario? Does it restart the WF runtime on a separate thread from the thread that is concurrently storing state?


  • Unresponsive Clojure REPL after exception

    - by Hendekagon
    If I start a REPL and then do something that throws an exception, like (use 'non-existent-thing)**, then after that the REPL ceases to evaluate anything I enter. Is there a special key I can press to make it turn round, face me, uncross its arms and listen once more? Or must I Ctrl-D, restart, type everything up to where I was and get it right this time?

    ** which results in:

        Exception in thread "Thread-1" java.lang.RuntimeException:
        java.io.FileNotFoundException: Could not locate non_existent_thing__init.class
        or non_existent_thing.clj on classpath: (NO_SOURCE_FILE:0)


  • Which method is more robust and scalable?

    - by Dhruv Arya
    I am implementing a distributed chat system. In this system we have the following options:

        1. Make the client and server running at each node run as separate threads: the server,
           acting as the receiver, runs as a daemon thread, while the client, taking the user
           input, runs as a normal thread.
        2. Fork two processes, one for the client and one for the server.

    I am not able to decide which one to proceed with. Any insight would be great!
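
    A minimal sketch of option 1 (the port and wire protocol are illustrative): the receiver loops on a server socket as a daemon thread, so it dies automatically when the input-reading main thread exits.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ChatNode {
            public static void main(String[] args) throws Exception {
                Thread receiver = new Thread(new Runnable() {
                    public void run() {
                        try {
                            ServerSocket server = new ServerSocket(9000);
                            while (true) {
                                Socket peer = server.accept();
                                // ... read one message from peer and print it ...
                                peer.close();
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                receiver.setDaemon(true);  // JVM exits when only daemons remain
                receiver.start();

                // Normal (non-daemon) thread: read user input until EOF.
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = in.readLine()) != null) {
                    // ... send line to the other nodes ...
                }
            }
        }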


  • Can't access individual Samba shares

    - by Richard Maddis
    I've just installed CentOS and I'm configuring Samba. I have a share with the following in the smb.conf file:

        [storage]
        comment = Main storage for all use
        path = /share
        public = yes
        browseable = yes
        writable = yes
        printable = no
        write list = bob root
        create mask = 0775
        guest ok = yes
        available = yes

    In Windows Explorer, I can reach the page listing all the shares on the server, but when I click on the shares themselves, I get an error saying that the folder cannot be found. I have verified that the folder /share exists, and I've also given it 777 permissions, so it cannot be due to permissions. What is causing this? I can post more config files if necessary.
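
    Since the permissions are already wide open, a hedged guess on a fresh CentOS install is SELinux: without the samba_share_t context, smbd can list a share but not enter it. A sketch of how to test and label it (to be verified on the actual box; semanage fcontext plus restorecon is the way to make the label survive relabels):

        getenforce                          # "Enforcing" means SELinux is active
        setenforce 0                        # temporarily permissive, for testing only
        chcon -R -t samba_share_t /share    # label the share for Samba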


  • How to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a bunch of servers connected to shared storage in a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows 2003 Server for DHCP, DNS and A/D? Or would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All that I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS and AD clashes.


  • Many-to-many mapping with LINQ

    - by Alexander
    I would like to perform LINQ to SQL mapping in C#, in a many-to-many relationship, but where the data is not mandatory. To be clear: I have a news site/blog, and there's a table called Posts. A post can relate to many categories at once, so there is a table called CategoriesPosts whose foreign keys link to the Posts table and the Categories table. I've given each table an identity primary key, an id field in each one, if it matters in this case. In C# I defined a class for each table, defining each field as explicitly as possible. The Post class, as well as the Category class, has an EntitySet to link to CategoryPost objects, and the CategoryPost class has two EntityRef members to link to objects of each of the other two types. The problem is that a post may or may not relate to any category, just as a category may or may not have posts in it. I didn't find a way to make an EntitySet<CategoryPost?> or something like that. So when I added the first post, all went well, with not a single SQL statement. Also, this post was present in the output. When I tried to add the second post I got an exception, "Object reference not set to an instance of an object", relating to the CategoryPost member.

    Post:

        [Table(Name="tm_posts")]
        public class Post : IDataErrorInfo
        {
            public Post()
            {
                // Initialization of NOT NULL fields with their default values
            }

            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            private EntitySet<CategoryPost> _categoryRef = new EntitySet<CategoryPost>();

            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_categoryRef", ThisKey = "ID", OtherKey = "PostID")]
            public EntitySet<CategoryPost> CategoryRef
            {
                get { return _categoryRef; }
                set { _categoryRef.Assign(value); }
            }
        }

    CategoryPost:

        [Table(Name = "tm_rel_categories_posts")]
        public class CategoryPost
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_post", DbType = "int", CanBeNull = false)]
            public int PostID { get; set; }

            [Column(Name = "fk_category", DbType = "int", CanBeNull = false)]
            public int CategoryID { get; set; }

            private EntityRef<Post> _post = new EntityRef<Post>();

            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_post", ThisKey = "PostID", OtherKey = "ID")]
            public Post Post
            {
                get { return _post.Entity; }
                set { _post.Entity = value; }
            }

            private EntityRef<Category> _category = new EntityRef<Category>();

            [Association(Name = "tm_rel_categories_posts_fk", IsForeignKey = true, Storage = "_category", ThisKey = "CategoryID", OtherKey = "ID")]
            public Category Category
            {
                get { return _category.Entity; }
                set { _category.Entity = value; }
            }
        }

    Category:

        [Table(Name="tm_categories")]
        public class Category
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_parent", DbType = "int", CanBeNull = true)]
            public int ParentID { get; set; }

            private EntityRef<Category> _parent = new EntityRef<Category>();

            [Association(Name = "tm_posts_fk2", IsForeignKey = true, Storage = "_parent", ThisKey = "ParentID", OtherKey = "ID")]
            public Category Parent
            {
                get { return _parent.Entity; }
                set { _parent.Entity = value; }
            }

            [Column(Name = "name", DbType = "varchar(100)", CanBeNull = false)]
            public string Name { get; set; }
        }

    So what am I doing wrong? How do I make it possible to insert a post that doesn't belong to any category? How do I insert categories with no posts?
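
    A hedged guess at the exception: LINQ to SQL requires a column declared with CanBeNull = true to map to a nullable CLR type, otherwise there is no way to represent "no parent". A post with no categories needs no mapping change at all; an empty EntitySet<CategoryPost> already models it, since the junction row is simply never created. The sketch below shows the nullable mapping (an assumption, not a verified fix):

        // In Category: a nullable FK, so a root category can have no parent.
        [Column(Name = "fk_parent", DbType = "int", CanBeNull = true)]
        public int? ParentID { get; set; }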


  • Not able to safely remove external disk after having mounted and unmounted a VHD on it

    - by Agnel Kurian
    I am using Windows 7 SP1. I have an external hard disk (Seagate 500GB) which I am able to use without problems most of the time: I can plug it in, use it, and then safely unmount it via the "Eject USB Mass Storage Device" option in the taskbar tray. However, if I attach a VHD file located on this disk using Disk Management, then detach the VHD and finally try to safely disconnect the disk via the system tray, I get an error which says: "Problem Ejecting USB Mass Storage Device: Windows can't stop your 'Generic volume' device because a program is still using it. Close any programs that might be using the device, and then try again later." How do I avoid this problem? Which process could still be accessing the device (even after I have closed the Disk Management application)?
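
    One way to see which process still holds the volume open (a general troubleshooting sketch; the drive letter is illustrative) is Sysinternals' handle utility from an elevated prompt:

        handle.exe E:\

    Each matching line names the owning process and the open handle, which usually identifies what is blocking the eject.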

