Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 215/1338 | < Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >

  • FATA disk performance for VMware

    - by Sergei
    Hi, we are moving to the datacenter and planning to have tiered storage on an EVA4400 - FC RAID 10 for SQL databases and RAID 5 across 24 FATA 1 TB disks for VMware ESX guests. HP describes FATA disks as suitable for near-online storage; however, I am not convinced that 24 spindles will be enough for running VMware guests on 3 ESX servers. Does anyone have an opinion on why this could be such a bad idea?

    Read the article

  • MSMQ Resilience

    - by Paddy Carroll
    I have a requirement for a resilient MSMQ setup on VMware ESX 5. I am aware that the queue storage cannot be shared, as it must live on a physical disk mount, i.e. it can't be a CIFS or DFS share. The following constraints apply: we don't use Windows clustering, and we don't rely on hot standbys. Is there a way I can replicate the queue storage to another platform so that it can assume MSMQ duties on failure of the primary platform, using any method including queue forwarding?

    Read the article

  • How to remove iso 9660 from USB?

    - by a_m0d
    I have somehow managed to write an ISO 9660 image onto my USB drive, which makes my computer think that the device is actually a CD. I have tried various methods of removing this partition, but nothing seems to work. I have tried fdisk, which says: $ fdisk -l /dev/sdb Cannot open /dev/sdb. parted crashes when I try to use it on this device. I have even tried $ dd if=/dev/zero of=/dev/sdb but it just hangs with no output (either on screen or on disk). However, when I plug the USB drive in, it does mount, and I can view (but not edit) the files on it. Edit: now the result is $ dd if=/dev/zero of=/dev/sdb dd: opening `/dev/sdb': Read-only file system. I have also tried re-formatting it on Windows, but it gets to the end of the format process and then says "Couldn't format the drive". How can I remove this partition and get my whole USB drive back to normal again?
    EDIT 1: Trying a simple mkfs doesn't work: $ sudo mkfs -t vfat /dev/sdb mkfs.vfat 3.0.0 (28 Sep 2008) mkfs.vfat: Will not try to make filesystem on full-disk device '/dev/sdb' (use -I if wanted). I can't do mkfs on /dev/sdb1 because there is no such partition, as shown: $ ls /dev | grep sdb sdb
    EDIT 2: This is the information posted by dmesg when I plug the device in: $ dmesg . . (snip) . usb 2-1: New USB device found, idVendor=058f, idProduct=6387 usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 usb 2-1: Product: Mass Storage usb 2-1: Manufacturer: Generic usb 2-1: SerialNumber: G0905000000000010885 usb-storage: device found at 4 usb-storage: waiting for device to settle before scanning usb-storage: device scan complete scsi 6:0:0:0: Direct-Access FLASH Drive AU_USB20 8.07 PQ: 0 ANSI: 2 sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB) sd 6:0:0:0: [sdb] Write Protect is off sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00 sd 6:0:0:0: [sdb] Assuming drive cache: write through sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB) sd 6:0:0:0: [sdb] Write Protect is off sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00 sd 6:0:0:0: [sdb] Assuming drive cache: write through sdb: unknown partition table sd 6:0:0:0: [sdb] Attached SCSI removable disk sd 6:0:0:0: Attached scsi generic sg2 type 0 ISO 9660 Extensions: Microsoft Joliet Level 3 ISO 9660 Extensions: RRIP_1991A SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts CE: hpet increasing min_delta_ns to 15000 nsec This shows that the device is formatted as ISO 9660 and that it is /dev/sdb.
    EDIT 3: This is the message that I find at the bottom of dmesg after running cfdisk and writing a new partition table to the disk: SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts sd 17:0:0:0: [sdb] Device not ready: Sense Key : Not Ready [current] sd 17:0:0:0: [sdb] Device not ready: < ASC=0xff ASCQ=0xff ASC=0xff < ASCQ=0xff end_request: I/O error, dev sdb, sector 0 Buffer I/O error on device sdb, logical block 0 lost page write due to I/O error on sdb
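
    An illustrative recovery sketch (not from the original thread), assuming the stick really is /dev/sdb, that nothing on it needs to be kept, and that the read-only condition clears after re-plugging the drive while it is unmounted:

        # unmount anything automounted from the stick first
        sudo umount /dev/sdb* 2>/dev/null
        # overwrite the first few megabytes to destroy the ISO 9660 signature
        sudo dd if=/dev/zero of=/dev/sdb bs=1M count=8
        # create a fresh partition table with one primary partition
        sudo fdisk /dev/sdb        # n, p, 1, accept defaults, set type to FAT32, w
        # put a FAT filesystem on the new partition
        sudo mkfs -t vfat /dev/sdb1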

    Read the article

  • Is keeping the primary hard disk as disk C: still relevant?

    - by Jeremy French
    Back in the day, floppy disks were A: and, if you were lucky, B:; then, when permanent storage came along, C: became the default for hard disks (as I remember it). Now that many computers no longer have floppy disks, is it possible to have your primary hard disk as A:, or is the convention simply outdated? Removable drives (like DVDs and flash readers) now seem to take lower precedence than permanent storage, so it is a bit of an oddity that floppy disks should still hold the higher letters.

    Read the article

  • Windows DFS File System Clustering

    - by tearman
    We're attempting to set up a high-availability network for our file servers, and we want to run a DFS file system cluster using the same back-end storage (our back-end storage has its own clustering mechanisms that it manages itself). The questions are: A. how would one go about setting up DFS clustering, and B. how can we get Windows to cooperate with multiple servers accessing the same SAN volumes?

    Read the article

  • Utility for easily disabling/enabling extra hard drives?

    - by SkippyFire
    I just got an Asus G60 laptop and will be installing an SSD as the primary drive, keeping the existing HDD as a storage drive. Is there a utility I can use to turn off/disconnect the storage drive when I'm not using it? Mainly, I want to conserve power when I'm mobile, since the battery life of this laptop is pretty weak. Thanks in advance!

    Read the article

  • Relationship between RAM & processor speed

    - by deostroll
    RAM is just used for temporary storage, but since this storage is the machine's main memory (RAM), it is fast, and programs can easily read/write values in it. I've noticed that the more RAM there is, the less time it takes for an application to load/execute. But doesn't this actually depend on the processor speed (the MHz or GHz value)? I am wondering what the science/relationship is between processor speed and RAM.

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (which is a Debian base using KVM for virtualization, with a custom web front end to administer). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Windows 2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. A much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor or something else?
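
    For reference, an incremental ZFS send/receive of the kind described above looks roughly like the sketch below (hypothetical pool, dataset and host names; this also assumes the VM images already live on a ZFS dataset, which is not the case in the current mdadm+DRBD+LVM setup):

        # snapshot the dataset at the two points in time
        zfs snapshot tank/vmimages@0600
        zfs snapshot tank/vmimages@0700
        # stream only the blocks written between the two snapshots,
        # compress the stream, and store it on the offsite backup host
        zfs send -i tank/vmimages@0600 tank/vmimages@0700 \
          | bzip2 \
          | ssh backup.example.com "cat > /backups/vmimages-0600-0700.zfs.bz2"
        # (alternatively, pipe into 'zfs receive' on the remote side to keep
        # a live replica instead of flat files)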

    Read the article

  • USB Flash not recognised by Windows and BIOS, but works fine in Linux

    - by bbalegere
    I have a Transcend JetFlash 2 GB USB drive. It was working fine and I had been using it occasionally. All of a sudden it stopped working in all versions of Windows. The USB drive is also not recognised by the BIOS: it does not show in the list of bootable devices (it used to show up in the list earlier). However, the USB drive works fine in my Linux Mint 11 OS. Running dmesg gives this: [ 941.812192] usb 1-2: new high speed USB device using ehci_hcd and address 4 [ 941.936178] usb 1-2: device descriptor read/64, error -71 [ 942.164188] usb 1-2: device descriptor read/64, error -71 [ 942.380189] usb 1-2: new high speed USB device using ehci_hcd and address 5 [ 942.504138] usb 1-2: device descriptor read/64, error -71 [ 942.732179] usb 1-2: device descriptor read/64, error -71 [ 942.948154] usb 1-2: new high speed USB device using ehci_hcd and address 6 [ 943.364134] usb 1-2: device not accepting address 6, error -71 [ 943.476172] usb 1-2: new high speed USB device using ehci_hcd and address 7 [ 943.892140] usb 1-2: device not accepting address 7, error -71 [ 943.892191] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 944.296190] usb 2-2: new full speed USB device using uhci_hcd and address 3 [ 944.438251] usb 2-2: not running at top speed; connect to a high speed hub [ 944.709928] usbcore: registered new interface driver uas [ 944.729999] Initializing USB Mass Storage driver... [ 944.730509] scsi6 : usb-storage 2-2:1.0 [ 944.730908] usbcore: registered new interface driver usb-storage [ 944.730917] USB Mass Storage support registered. [ 945.736320] scsi 6:0:0:0: Direct-Access JetFlash Transcend 2GB 8.07 PQ: 0 ANSI: 2 [ 945.744547] sd 6:0:0:0: Attached scsi generic sg1 type 0 [ 945.753316] sd 6:0:0:0: [sdb] 3944448 512-byte logical blocks: (2.01 GB/1.88 GiB) [ 945.758274] sd 6:0:0:0: [sdb] Write Protect is off [ 945.758288] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00 [ 945.765167] sd 6:0:0:0: [sdb] No Caching mode page present [ 945.765181] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 945.784309] sd 6:0:0:0: [sdb] No Caching mode page present [ 945.784323] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 946.239512] sdb: sdb1 [ 946.257279] sd 6:0:0:0: [sdb] No Caching mode page present [ 946.257292] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 946.257302] sd 6:0:0:0: [sdb] Attached SCSI removable disk Looks like there is something wrong with the USB drive: it is not recognised on any computer running Windows. Is there any way to fix this? Any idea why this problem occurred?

    Read the article

  • How do i enable innodb on ubuntu server 10.04

    - by Matt
    Here is my entire my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] key_buffer = 224M sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 12M query_cache_size = 44M # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # #key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M #query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. 
    # !includedir /etc/mysql/conf.d/
    And here is my SHOW ENGINES output... I have no idea what I need to do to enable InnoDB:
    show engines;
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
    | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
    | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
    | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
    | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
    | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
    | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    7 rows in set (0.00 sec)
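
    A minimal troubleshooting sketch, not a definitive fix (assumptions: the MySQL 5.1 packages shipped with Ubuntu 10.04 and their default paths). When InnoDB is missing from SHOW ENGINES entirely, it is usually either disabled via a skip-innodb line in some included config file, or it failed to start, which the error log will show (a common cause is a changed innodb_log_file_size that no longer matches the existing ib_logfile files):

        # 1. check every config file MySQL reads for a skip-innodb line
        grep -R "skip-innodb" /etc/mysql/
        # 2. look for InnoDB start-up errors
        sudo tail -n 50 /var/log/mysql/error.log
        # 3. if the log complains about ib_logfile size, move the old logs aside
        #    (only safe while MySQL is stopped or InnoDB never started) and restart
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
        sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
        sudo service mysql restart
        # 4. confirm: mysql -e "SHOW ENGINES;" should now list InnoDB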

    Read the article

  • Optiplex can't find SATA III Controller

    - by Joel Rodgers
    I just purchased a HighPoint Rocket 620 storage controller (Serial ATA-600, 600 MBps, OEM version) and an OWC SSD. For some reason, my Dell Optiplex 755 BIOS sees this card as a storage device installed in the x1 PCI Express slot, but I can't get it to boot from it. In fact, I don't even see the boot screen mentioned by the manual. Any help would be greatly appreciated. FYI, I tried every imaginable BIOS setting, including using legacy mode instead of AHCI.

    Read the article

  • How to attach files to an email in Windows Phone 7.5 Mango

    - by Vaibhav Garg
    In the default email client in Windows Phone 7.5 Mango, how can arbitrary files (.zip, .mp3, .txt, .pdf etc.) be attached? As the storage is sandboxed, a file handler can implement hooks to the email client (as MS Office does and Adobe Reader doesn't), but the email client cannot access files in the phone's storage. Is there a way, or a workaround? In my usage pattern, I tend to send a lot of PDFs, and am unable to do that!

    Read the article

  • Freeware (preferably open-source) tool for creating multi-file spanning archives as a self merging SFX

    - by Lockszmith
    I have a large file I want to transfer using either Internet storage hosting, DVD-Rs or USB storage, which is sometimes limited to FAT file systems (for example: mobile phones). What I'm basically looking for is a tool that creates multiple files/volumes (less than 2 GB each - FAT's file size limit) which are packed with a self-extracting executable. Currently the only tool I have found that does this is WinRAR, but that's shareware, not free. Is there any free, preferably open-source tool that does that? Thanks in advance.

    Read the article

  • can i find the web hosting company from an ip address ?

    - by ufk
    Hiya. I really hope this question suits Server Fault; if not, my apologies! I have an IP address - is there a way to find the web hosting service that this IP address belongs to? I tried using whois and traceroute but no luck so far. The case is that my friend bought a domain and storage several years ago and he can't remember where he bought the storage from. Thanks!
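
    A small sketch of the usual starting points (the address below is just a documentation placeholder, and the fields to look for vary by regional registry):

        # the RIR whois record names the organisation the address block is
        # allocated to, which is often the hosting company itself
        whois 203.0.113.45 | grep -iE 'orgname|org-name|netname|descr'
        # reverse DNS frequently carries the hoster's domain name as well
        dig -x 203.0.113.45 +short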

    Read the article

  • Interview question: Develop an application that can display "trial period expired" after 30 days without external storage

    - by Algorist
    Hi, I saw this question in a forum about how an application can be developed that keeps track of the installation date and shows that the trial period has expired after 30 days of usage. The only constraint is not to use external storage of any kind. Question: how can this be achieved? Thanks, Bala. --Edit: I think it's easy to figure out where the question mark should go. Anyway, I will state the question clearly: "external storage" means don't use any kind of storage like a file, the registry, the network or anything else. You only have your program.

    Read the article

  • Splitting assemblies - finding the balance (avoiding overkill)

    - by M.A. Hanin
    I'm writing a wide component infrastructure to be used in my projects. Since not all projects will require every component created, I've been thinking of splitting the components into discrete assemblies, so that every application developed will only be deployed with the required assemblies. I assume that creating an assembly has some storage overhead (the assembly's code, wrapping whatever is inside). Therefore, there must be some limit to the advantage gained by splitting an assembly - a certain point where splitting the assembly is worse than keeping it united (storage-wise and performance-wise). Now, here is the question: how do I know when splitting an assembly is overkill? P.S. I guess there are other overheads to assembly splitting, aside from the storage overhead. If anyone can point out these overheads, it would be much appreciated.

    Read the article

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that uses SQLite storage for the final application (this is based on Xcode template code). For unit tests I have created a separate init method for 'database' that uses in-memory storage (the storage URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit tests target, the test fails in the expected place with the message: executeFetchRequest:error: A fetch request must have an entity. If I add the model file to the test target, the test is executed as expected (the Core Data part looks OK), but after test execution I get: RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents Command /bin/sh failed with exit code 139 This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!

    Read the article

  • Can I use a single DateTime field on the Entity Framework model side when the value is stored in a set of Int fields in the actual database?

    - by Ivan
    The actual table in the database has separate integer fields for storing year, month, day, hour and minute values (all in UTC) (seconds and milliseconds are irrelevant for my task and considered equal to zero). Needless to say it would be of great convenience to have just one field of DateTime type on the application side and hide all the conversion under the cover of the Entity Framework model code. Any directions on how to do that? I am not very experienced with Entity Framework yet.
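
    One common workaround, shown here only as a sketch with hypothetical entity and property names (the question does not name them), is to keep the five int columns mapped as-is and expose an unmapped DateTime property from a partial class on the application side:

        using System;

        // Assumes the designer-generated entity is a partial class named Event
        // with mapped int properties Year, Month, Day, Hour and Minute.
        public partial class Event
        {
            // Not mapped to any column: composed from the mapped int fields.
            public DateTime OccurredAtUtc
            {
                get
                {
                    return new DateTime(Year, Month, Day, Hour, Minute, 0, DateTimeKind.Utc);
                }
                set
                {
                    var utc = value.ToUniversalTime();
                    Year = utc.Year;
                    Month = utc.Month;
                    Day = utc.Day;
                    Hour = utc.Hour;
                    Minute = utc.Minute;
                }
            }
        }

    Because the property is not mapped, it can only be used after rows have been materialized; queries such as date-range filters still have to be expressed against the underlying int columns.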

    Read the article

  • python optparse, how to include additional info in usage output?

    - by CarpeNoctem
    Using Python's optparse module I would like to add extra example lines below the regular usage output. My current print_help() output looks like this: usage: check_dell.py [options] options: -h, --help show this help message and exit -s, --storage checks virtual and physical disks -c, --chassis checks specified chassis components I would like it to include usage examples for the less *nix-literate users at my work. Something like this: usage: check_dell.py [options] options: -h, --help show this help message and exit -s, --storage checks virtual and physical disks -c, --chassis checks specified chassis components Examples: check_dell -c all check_dell -c fans memory voltage check_dell -s How would I accomplish this? Which optparse options allow for this? Current code: import optparse def main(): parser = optparse.OptionParser() parser.add_option('-s', '--storage', action='store_true', default=False, help='checks virtual and physical disks') parser.add_option('-c', '--chassis', action='store_true', default=False, help='checks specified chassis components') (opts, args) = parser.parse_args()
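
    A sketch of one way this is commonly done (not from the original post): optparse.OptionParser accepts an epilog argument, but the default help formatter re-wraps it and discards newlines, so the usual trick is to subclass the parser and return the epilog untouched:

        import optparse

        EXAMPLES = """
        Examples:
          check_dell -c all
          check_dell -c fans memory voltage
          check_dell -s
        """

        class PlainEpilogParser(optparse.OptionParser):
            # keep the epilog exactly as written instead of letting the
            # default formatter re-wrap it into a single paragraph
            def format_epilog(self, formatter):
                return self.epilog or ""

        def main():
            parser = PlainEpilogParser(epilog=EXAMPLES)
            parser.add_option('-s', '--storage', action='store_true', default=False,
                              help='checks virtual and physical disks')
            parser.add_option('-c', '--chassis', action='store_true', default=False,
                              help='checks specified chassis components')
            # running the script with -h now prints the Examples block
            # below the regular options listing
            (opts, args) = parser.parse_args()

        if __name__ == '__main__':
            main()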

    Read the article

  • MongoDB in Go (golang) with mgo: How do I update a record, find out if update was successful and get the data in a single atomic operation?

    - by Sebastián Grignoli
    I am using the mgo driver for MongoDB under Go. My application asks for a task (with just a record select in Mongo from a collection called "jobs") and then registers itself as the assignee to complete that task (an update to that same "job" record, setting itself as assignee). The program will be running on several machines, all talking to the same Mongo. When my program lists the available tasks and then picks one, other instances might already have obtained that assignment, and the current assignment would fail. How can I be sure that the record I read and then update does or does not have a certain value (in this case, an assignee) at the time it is updated? I am trying to get one assignment, no matter which one, so I think I should first select a pending task and try to assign it, keeping it only if the update was successful. So my query should be something like: "From all records in the collection 'jobs', update just one that has assignee=null, setting my ID as the assignee. Then give me that record so I can run the job." How can I express that with the mgo driver for Go?
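
    A sketch of how this is usually expressed with mgo (hypothetical collection layout and field names): Query.Apply issues a MongoDB findAndModify, so matching an unassigned document and claiming it happen as one atomic step on the server.

        // A minimal, self-contained sketch (assumed schema: each document in
        // "jobs" has an "assignee" field that is null while the job is free).
        package main

        import (
            "log"

            "gopkg.in/mgo.v2"      // older code bases import labix.org/v2/mgo instead
            "gopkg.in/mgo.v2/bson"
        )

        type Job struct {
            ID       bson.ObjectId `bson:"_id"`
            Assignee string        `bson:"assignee"`
            Payload  string        `bson:"payload"`
        }

        // claimJob atomically picks one unassigned job and marks it as ours.
        func claimJob(jobs *mgo.Collection, workerID string) (*Job, error) {
            change := mgo.Change{
                Update:    bson.M{"$set": bson.M{"assignee": workerID}},
                ReturnNew: true, // hand back the document as it looks after the update
            }
            var job Job
            // Apply runs findAndModify: the match on assignee=null and the $set
            // happen atomically, so two workers can never claim the same job.
            _, err := jobs.Find(bson.M{"assignee": nil}).Apply(change, &job)
            if err != nil {
                return nil, err // mgo.ErrNotFound means every pending job is already taken
            }
            return &job, nil
        }

        func main() {
            session, err := mgo.Dial("localhost")
            if err != nil {
                log.Fatal(err)
            }
            defer session.Close()

            job, err := claimJob(session.DB("test").C("jobs"), "worker-1")
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("claimed job %s", job.ID.Hex())
        }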

    Read the article

  • Remote DocumentRoot in Apache gives a 404

    - by kshouler
    I have the following specified in my httpd.conf, but I get a 404 when attempting to connect to the server from another machine. If I set the DocumentRoot to the default htdocs directory, everything works fine. (Note: I've also tried replacing the "//storage/data1" part of the path with the network drive letter "U:".)
    ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
    DocumentRoot "//storage/data1/Engineering/Product Development"
    <Directory "//storage/data1/Engineering/Product Development">
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    Read the article

  • Why SELECT N + 1 with no foreign keys and LINQ?

    - by Daniel Flöijer
    I have a database that unfortunately have no real foreign keys (I plan to add this later, but prefer not to do it right now to make migration easier). I have manually written domain objects that map to the database to set up relationships (following this tutorial http://www.codeproject.com/Articles/43025/A-LINQ-Tutorial-Mapping-Tables-to-Objects), and I've finally gotten the code to run properly. However, I've noticed I now have the SELECT N + 1 problem. Instead of selecting all Product's they're selected one by one with this SQL: SELECT [t0].[id] AS [ProductID], [t0].[Name], [t0].[info] AS [Description] FROM [products] AS [t0] WHERE [t0].[id] = @p0 -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [65] Controller: public ViewResult List(string category, int page = 1) { var cat = categoriesRepository.Categories.SelectMany(c => c.LocalizedCategories).Where(lc => lc.CountryID == 1).First(lc => lc.Name == category).Category; var productsToShow = cat.Products; var viewModel = new ProductsListViewModel { Products = productsToShow.Skip((page - 1) * PageSize).Take(PageSize).ToList(), PagingInfo = new PagingInfo { CurrentPage = page, ItemsPerPage = PageSize, TotalItems = productsToShow.Count() }, CurrentCategory = cat }; return View("List", viewModel); } Since I wasn't sure if my LINQ expression was correct I tried to just use this but I still got N+1: var cat = categoriesRepository.Categories.First(); Domain objects: [Table(Name = "products")] public class Product { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int ProductID { get; set; } [Column] public string Name { get; set; } [Column(Name = "info")] public string Description { get; set; } private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "productId", ThisKey = "ProductID")] private ICollection<ProductCategory> ProductCategories { get { return _productCategories; } set { _productCategories.Assign(value); } } public ICollection<Category> Categories { get { return (from pc in ProductCategories select pc.Category).ToList(); } } } [Table(Name = "products_menu")] class ProductCategory { [Column(IsPrimaryKey = true, Name = "products_id")] private int productId; private EntityRef<Product> _product = new EntityRef<Product>(); [System.Data.Linq.Mapping.Association(Storage = "_product", ThisKey = "productId")] public Product Product { get { return _product.Entity; } set { _product.Entity = value; } } [Column(IsPrimaryKey = true, Name = "products_types_id")] private int categoryId; private EntityRef<Category> _category = new EntityRef<Category>(); [System.Data.Linq.Mapping.Association(Storage = "_category", ThisKey = "categoryId")] public Category Category { get { return _category.Entity; } set { _category.Entity = value; } } } [Table(Name = "products_types")] public class Category { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int CategoryID { get; set; } private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "categoryId", ThisKey = "CategoryID")] private ICollection<ProductCategory> ProductCategories { get { return _productCategories; } set { _productCategories.Assign(value); } } public ICollection<Product> Products { get { return (from pc in ProductCategories select pc.Product).ToList(); } } private 
EntitySet<LocalizedCategory> _LocalizedCategories = new EntitySet<LocalizedCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_LocalizedCategories", OtherKey = "CategoryID")] public ICollection<LocalizedCategory> LocalizedCategories { get { return _LocalizedCategories; } set { _LocalizedCategories.Assign(value); } } } [Table(Name = "products_types_localized")] public class LocalizedCategory { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int LocalizedCategoryID { get; set; } [Column(Name = "products_types_id")] private int CategoryID; private EntityRef<Category> _Category = new EntityRef<Category>(); [System.Data.Linq.Mapping.Association(Storage = "_Category", ThisKey = "CategoryID")] public Category Category { get { return _Category.Entity; } set { _Category.Entity = value; } } [Column(Name = "country_id")] public int CountryID { get; set; } [Column] public string Name { get; set; } } I've tried to comment out everything from my View, so nothing there seems to influence this. The ViewModel is as simple as it looks, so shouldn't be anything there. When reading this ( http://www.hookedonlinq.com/LinqToSQL5MinuteOVerview.ashx) I started suspecting it might be because I have no real foreign keys in the database and that I might need to use manual joins in my code. Is that correct? How would I go about it? Should I remove my mapping code from my domain model or is it something that I need to add/change to it? Note: I've stripped parts of the code out that I don't think is relevant to make it cleaner for this question. Please let me know if something is missing.
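
    For reference, LINQ to SQL's usual answer to N+1 is eager loading through DataLoadOptions. The sketch below makes some assumptions: the DataContext name is hypothetical, and LoadWith can only reference association members it can reach, so the private ProductCategories properties above would need to be made public (or otherwise accessible) for this to compile.

        using System.Data.Linq;
        using System.Linq;

        public static class CatalogQueries
        {
            // connectionString and CatalogDataContext are hypothetical names.
            public static void LoadCategoriesEagerly(string connectionString)
            {
                var db = new CatalogDataContext(connectionString);
                var options = new DataLoadOptions();
                // bring the link rows, products and localized names in together
                // with the categories, instead of one extra query per entity later
                options.LoadWith<Category>(c => c.ProductCategories);
                options.LoadWith<ProductCategory>(pc => pc.Product);
                options.LoadWith<Category>(c => c.LocalizedCategories);
                db.LoadOptions = options; // must be assigned before the first query runs

                var categories = db.GetTable<Category>().ToList();
                // categories, their link rows and products are now materialized
                // without issuing a separate SELECT per product
            }
        }

    Note that eager loading only flows through the mapped associations; helper properties that run their own sub-query per access (like the Categories/Products wrappers above) will still navigate the EntityRefs one at a time unless those associations were loaded up front.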

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >