Search Results

Search found 587 results on 24 pages for 'mapper'.

Page 6/24 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Impossible to remove Logical Volumes or Volume Group (LVM2)

    - by abkrim
    After a disk crash took out part of an LVM group, I can't use LVM2 properly. For example, deleting a Logical Volume on the damaged SATA volume is impossible:

        # lvscan
        Couldn't find device with uuid vxHO8W-FPbL-9d5N-GUVb-Lo8d-D9WZ-1RY3Bx.
        inactive '/dev/sata/isos' [100.00 GiB] inherit
        inactive '/dev/sata/vm-999-disk-1' [10.00 GiB] inherit
        inactive '/dev/sata/vm-300-disk-1' [51.00 GiB] inherit
        ACTIVE '/dev/pve/vm-103-disk-1' [200.00 GiB] inherit

        # lvremove /dev/sata/isos
        Couldn't find device with uuid vxHO8W-FPbL-9d5N-GUVb-Lo8d-D9WZ-1RY3Bx.
        Segmentation fault

        # dmsetup remove --force sata
        device-mapper: table ioctl on sata failed: No such device or address
        device-mapper: reload ioctl on sata failed: No such device or address
        device-mapper: remove ioctl on sata failed: No such device or address
        Command failed

    I also tried vgreduce --removemissing and other commands to delete everything on the SATA volume and start from zero. The PVE volume is in production. Appreciate any help.
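
    A minimal sketch of one common recovery path, not from the original post: it assumes the volume group is named sata and the missing disk is gone for good. Note this permanently discards the LVs that lived on the missing PV, so back up the metadata first.

        # save current VG metadata before touching anything
        vgcfgbackup sata
        # drop the missing PV and any partially-missing LVs from the VG
        vgreduce --removemissing --force sata
        # the remaining LVs should now scan (and remove) cleanly
        lvscan
        lvremove /dev/sata/isos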

    Read the article

  • Can I lvreduce after lvextend without losing the ext4 partition inside it?

    - by DrSAR
    In a botched attempt to move my root partition from one disk to another, I have done the following:

    - added a new disk
    - partitioned it with parted (partition #3 now almost totally fills the disk)
    - initialized a physical volume:

        $ pvcreate /dev/sdb3
        Physical volume "/dev/sdb3" successfully created

    - extended the volume group to include this new physical volume:

        $ vgextend myvg /dev/sdb3
        Volume group "myvg" successfully extended

    - extended the logical volume (I think this is where I ballsed it up: I think I should have pvmove'd stuff to the new PV in that group instead -- can someone confirm?):

        $ lvextend /dev/mapper/myvg-root /dev/sdb3

    I would now like to undo the lvextend and then proceed with the original plan of moving the contents of the old physical volume over to the new physical volume. Can I reduce the logical volume without fear of damaging the ext4 filesystem that sits in /dev/mapper/myvg-root (I have not yet touched it with anything like resize2fs)? If so, how do I tell it to reduce by exactly the right amount?

        $ lvreduce --by-exactly-the-amount-occupied-by-PV /dev/sdb3 /dev/mapper/myvg-root
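
    A sketch of one way to size the reduction, offered as a suggestion rather than an accepted answer; it relies on the fact that the ext4 filesystem was never grown, so the extents added by lvextend are still unused. The extent count N below is an assumption you must read off your own system.

        # find how many physical extents of the LV live on /dev/sdb3
        $ pvdisplay /dev/sdb3            # note the 'Allocated PE' value, call it N
        # shrink the LV by exactly those N extents
        $ lvreduce -l -N /dev/mapper/myvg-root
        # then proceed with the original plan, moving data off the old PV
        $ pvmove /dev/sdaX /dev/sdb3     # /dev/sdaX = your old PV (placeholder)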

    Read the article

  • How do I get 12.04 to recognize swap partition so that I can hibernate?

    - by Kayla
    I just installed 12.04 and used GParted to erase and enlarge my swap partition. When I rebooted, GParted said that the file system of the swap partition was unknown. GParted doesn't let me change the file system to "linux-swap". It does let me change it to NTFS, but when I reboot, it goes back to "unknown". Thanks in advance for your help.

    Output from sudo swapon -s:

        Filename                 Type       Size     Used  Priority
        /dev/mapper/cryptswap1   partition  9025532  0     -1

    Output from sudo fdisk -l:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9d63ac84

        Device Boot      Start        End    Blocks  Id  System
        /dev/sda1   *     2048    2459647   1228800   7  HPFS/NTFS/exFAT
        /dev/sda2      2459648  197836472  97688412+  7  HPFS/NTFS/exFAT
        /dev/sda3    466890752  488395119  10752184   7  HPFS/NTFS/exFAT
        /dev/sda4    197836798  466890751 134526977   5  Extended
        /dev/sda5    197836800  448837631 125500416  83  Linux
        /dev/sda6    448839680  466890751   9025536  82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/mapper/cryptswap1: 9242 MB, 9242148864 bytes
        255 heads, 63 sectors/track, 1123 cylinders, total 18051072 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x951b7f53

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table
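
    One hedged observation, not from the original thread: /dev/mapper/cryptswap1 is an encrypted swap that is re-keyed randomly at every boot, and hibernation cannot resume from such a device. A sketch of the usual trade-off -- giving up swap encryption so resume works -- using the device names from the fdisk output above:

        sudo swapoff /dev/mapper/cryptswap1
        sudo mkswap /dev/sda6                       # recreate plain swap
        # remove the cryptswap lines from /etc/crypttab and /etc/fstab, add an
        # fstab entry for the new swap UUID, then point the initramfs at it:
        echo "RESUME=UUID=$(sudo blkid -s UUID -o value /dev/sda6)" | \
            sudo tee /etc/initramfs-tools/conf.d/resume
        sudo update-initramfs -u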

    Read the article

  • KVM guest I/O is much slower than host I/O: is that normal?

    - by Evolver
    I have a Qemu-KVM host system set up on CentOS 6.3, with four 1 TB SATA HDDs in software RAID10. The guest, also CentOS 6.3, is installed on a separate LVM volume. People say that they see guest performance almost equal to host performance, but I don't see that. My I/O tests show 30-70% slower performance in the guest than on the host system. I tried changing the scheduler (elevator=deadline on the host and elevator=noop on the guest), setting blkio.weight to 1000 in the cgroup, and switching the disk bus to virtio... but none of these changes gave any significant result. This is the relevant part of the guest .xml config:

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/dev/vgkvmnode/lv2'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>

    Here are my tests. Host system, iozone:

        # iozone -a -i0 -i1 -i2 -s8G -r64k
                                                               random   random
              KB  reclen   write  rewrite    read   reread       read    write
         8388608      64  189930   197436  266786   267254      28644    66642

    Host, dd read test (one process, then four simultaneous processes):

        # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct
        1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s

        # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096
        1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s
        1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s
        1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s
        1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s

    Host, dd write test (one process, then four simultaneous processes):

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s
        1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s
        1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s
        1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s

    Guest system, iozone:

        # iozone -a -i0 -i1 -i2 -s512M -r64k
                                                               random   random
              KB  reclen   write  rewrite    read   reread       read    write
          524288      64   93374   154596  141193   149865      21394    46264

    Guest, dd read test (one process, then four simultaneous processes):

        # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024
        1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s

        # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096
        1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s

    Guest, dd write test (one process, then four simultaneous processes):

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s

    Is this a normal situation, or did I miss something?
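
    A commonly suggested tuning step, not from the original post: with a raw LV backing the guest, the host page cache adds a second layer of caching that hurts direct I/O. A hedged tweak to the same <driver> element (cache and io are standard libvirt attributes):

        <driver name='qemu' type='raw' cache='none' io='native'/>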

    Read the article

  • AutoMapper - cannot resolve the generic List

    - by chugh97
    Here is my mapping configuration:

        Mapper.CreateMap<BusinessObject, Proxy.DataContacts.DCObject>()
            .ForMember(x => x.ExtensionData, y => y.Ignore())
            .ForMember(z => z.ValidPlaces, a => a.ResolveUsing(typeof(ValidPlaces)));
        Mapper.AssertConfigurationIsValid();

    And the types involved:

        public class BusinessObject
        {
            public Enum1 Enum1 { get; set; }
            public List<ValidPlaces> ValidPlaces { get; set; }
        }

        public class ValidPlaces
        {
            public int No { get; set; }
            public string Name { get; set; }
        }

        public class DCObject
        {
            [DataMember]
            public Enum1 Enum1 { get; set; }
            [DataMember]
            public List<ValidPlaces> ValidPlaces { get; set; }
        }

    Mapper.CreateMap works fine when Mapper.AssertConfigurationIsValid() is called, but when I actually call into the service, AutoMapper throws an exception saying ValidPlaces could not be mapped. It works fine if I put Ignore() on that member, but ideally I want it mapped. Any AutoMapper experts out there, please help.
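
    A hedged guess at the cause, not from the original post: ResolveUsing(typeof(...)) expects a value-resolver type, and ValidPlaces is a plain data class, so the map blows up at execution time. If both sides really use the same ValidPlaces element type, AutoMapper can map the list member by convention with no ForMember at all; a minimal sketch:

        Mapper.CreateMap<BusinessObject, Proxy.DataContacts.DCObject>()
            .ForMember(d => d.ExtensionData, o => o.Ignore());
        // If the two sides actually use *different* element types, map those
        // types instead and the List<> member follows automatically
        // (these names are hypothetical):
        // Mapper.CreateMap<BusinessValidPlaces, ContractValidPlaces>();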

    Read the article

  • Starter question about declarative-style SQLAlchemy relation()

    - by jfding
    I am quite new to SQLAlchemy, and even to database programming, so maybe my question is too simple. I have two classes/tables:

        class User(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            name = Column(String(40))
            ...

        class Computer(Base):
            __tablename__ = 'comps'
            id = Column(Integer, primary_key=True)
            buyer_id = Column(None, ForeignKey('users.id'))
            user_id = Column(None, ForeignKey('users.id'))
            buyer = relation(User, backref=backref('buys', order_by=id))
            user = relation(User, backref=backref('usings', order_by=id))

    Of course, it cannot run. This is the backtrace:

        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/state.py", line 71, in initialize_instance
          fn(self, instance, args, kwargs)
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 1829, in _event_on_init
          instrumenting_mapper.compile()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 687, in compile
          mapper._post_configure_properties()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/mapper.py", line 716, in _post_configure_properties
          prop.init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/interfaces.py", line 408, in init
          self.do_init()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 716, in do_init
          self._determine_joins()
        File "/Library/Python/2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/properties.py", line 806, in _determine_joins
          "many-to-many relation, 'secondaryjoin' is needed as well." % (self))
        sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation Package.maintainer. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

    There are two foreign keys in class Computer, so the relation() calls cannot determine which one should be used. I think I must use extra arguments to specify it, right? And how? Thanks.
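
    The usual fix, sketched under the same 0.5-era API (untested): give each relation() an explicit primaryjoin so it knows which foreign key to follow.

        class Computer(Base):
            __tablename__ = 'comps'
            id = Column(Integer, primary_key=True)
            buyer_id = Column(None, ForeignKey('users.id'))
            user_id = Column(None, ForeignKey('users.id'))
            # disambiguate the two FKs with explicit join conditions
            buyer = relation(User, primaryjoin=buyer_id == User.id,
                             backref=backref('buys', order_by=id))
            user = relation(User, primaryjoin=user_id == User.id,
                            backref=backref('usings', order_by=id))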

    Read the article

  • Is there a way to use LINQ projections with extension methods?

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and I'm running into difficulty with the LINQ projection. For what it's worth, this code works fine when simply using in-memory objects. When using a database provider, however, it breaks when constructing the query graph. I've tried both SubSonic and LINQ to SQL with the same result. Thanks for your ideas.

    Here's an extension method used in all scenarios -- it's the source of the problem, since everything works fine without extension methods:

        public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName)
        {
            return from u in users
                   where u.FirstName == firstName
                   select u;
        }

    Here's the in-memory code that works fine:

        var userlist = new List<User> { new User { FirstName = "Test", LastName = "User" } };
        Mapper.CreateMap<User, MyUser>();
        var result = (from u in userlist
                      select Mapper.Map<User, MyUser>(u))
                     .AsQueryable()
                     .ByName("Test");
        foreach (var x in result)
        {
            Console.WriteLine(x.FirstName);
        }

    Here's the same thing using SubSonic (or LINQ to SQL, or whatever) that fails. This is what I'd like to make work somehow with extension methods...

        Mapper.CreateMap<User, MyUser>();
        var result = from u in new DataClasses1DataContext().Users
                     select Mapper.Map<User, MyUser>(u);
        var final = result.ByName("Test");
        foreach (var x in final) // Fails here when the query graph is built.
        {
            Console.WriteLine(x.FirstName);
        }

    The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object -- in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed:

        var result = from u in new DataClasses1DataContext().Users
                     select new MyUser // Can this be avoided with AutoMapper AND extension methods?
                     {
                         FirstName = u.FirstName,
                         LastName = u.LastName
                     };
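
    A hedged workaround sketch, not from the original post: the database provider cannot translate Mapper.Map into SQL, so filter on the entity type first (which the provider can translate), then hand the results to AutoMapper in memory. The ByName overload on IQueryable<User> is an assumption, not existing code:

        public static IQueryable<User> ByName(this IQueryable<User> users, string firstName)
        {
            return users.Where(u => u.FirstName == firstName);
        }

        var mapped = new DataClasses1DataContext().Users
            .ByName("Test")                       // translated to SQL by the provider
            .AsEnumerable()                       // switch to LINQ to Objects here
            .Select(Mapper.Map<User, MyUser>);    // AutoMapper runs client-side

    The trade-off is that filtering logic lives on the entity type rather than the domain type, but the query still executes in the database.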

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
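
    A sketch of the association-object pattern the docs hint at (untested, 0.5-era API). Two assumptions worth flagging: property_values needs a composite primary key on (word_id, property_id), since the ORM cannot map a table without one, and the resulting dict is keyed by Property objects rather than plain strings.

        from sqlalchemy.ext.associationproxy import association_proxy
        from sqlalchemy.orm.collections import attribute_mapped_collection

        class PropertyValue(object):
            def __init__(self, property, value):
                self.property = property
                self.value = value

        # map the link table to its own class...
        mapper(PropertyValue, property_values, properties={
            'property': relation(Property)
        })

        # ...collect PropertyValues on Word, keyed by their Property...
        mapper(Word, words, properties={
            '_values': relation(PropertyValue,
                collection_class=attribute_mapped_collection('property'))
        })

        # ...and expose just the values as a dict-like attribute:
        Word.props = association_proxy('_values', 'value',
                                       creator=lambda k, v: PropertyValue(k, v))

        # usage: word.props[Property('bar')] = 'yes'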

    Read the article

  • Fedora 13 post security update boot problem

    - by Alex
    Hello. About a month ago I installed a security update that had a new kernel (2.6.34.x, up from 2.6.33.x); this is when the problem occurred for the first time. After the install my computer would not boot at all: black screen, without any visible hard drive activity (I gave it a good 30 minutes on the black screen before taking action)... I popped in the installation DVD and went into rescue mode to change the boot option back to the old kernel (it was just a guess where the problem was). After restart the computer loaded just fine; it took a long time to start because "SELinux targeted policy relabel is required. Relabeling could take a very long time depending on file size." I assumed that the update had somehow gotten messed up and continued working with the modified boot option. A couple of days ago, there was another kernel update. I installed it, and got the same problem as before. This rules out the corrupted-update theory... Black screen right after the BIOS screen, before the OS gets loaded. I had to rescue the system again... Below is a copy of my grub.conf file. I am fairly new to Linux (a couple of years of experience), mostly development and basic config... nothing crazy.

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        #   all kernel and initrd paths are relative to /boot/, eg.
        #   root (hd0,0)
        #   kernel /vmlinuz-version ro root=/dev/mapper/vg_obalyuk-lv_root
        #   initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=2
        timeout=0
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.34.6-54.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-54.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-54.fc13.i686.PAE.img
        title Fedora (2.6.34.6-47.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.34.6-47.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.34.6-47.fc13.i686.PAE.img
        title Fedora (2.6.33.8-149.fc13.i686.PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.33.8-149.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
            initrd /initramfs-2.6.33.8-149.fc13.i686.PAE.img

    I like my system to be up to date... Let me know if I can post any other files that would help. Has anyone else had this problem? Does anyone have any ideas how to fix it? p.s. Anything helps, you ppl are great! Thanks for your time.
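
    A hedged guess, not a confirmed diagnosis: a black screen immediately after the BIOS screen on a new kernel, while an older kernel boots fine, is a classic symptom of a kernel mode-setting (KMS) regression. One low-risk test is booting the 2.6.34 entry once with KMS disabled, by appending nomodeset to its kernel line:

        kernel /vmlinuz-2.6.34.6-54.fc13.i686.PAE ro root=/dev/mapper/vg_obalyuk-lv_root rd_LVM_LV=vg_obalyuk/lv_root rd_LVM_LV=vg_obalyuk/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet nomodeset

    If that boots, the problem is the graphics driver's mode setting rather than the update itself.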

    Read the article

  • xen-create-image does not create an initrd or initramfs image, and domU does not start with the system image

    - by user219372
    I have Fedora 19 as Dom0. To create an image I run:

        # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy

    After generation finishes, I start the VM and see:

        # xl create /etc/xen/debian-wheezy.cfg
        Parsing config from /etc/xen/debian-wheezy.cfg
        libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory
        libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3

    In /etc/xen/debian-wheezy.cfg I have:

        #
        # Kernel + memory size
        #
        kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64'
        ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64'

    and ls -1 /boot/*201* shows:

        /boot/config-3.11.2-201.fc19.x86_64
        /boot/initramfs-3.11.2-201.fc19.x86_64.img
        /boot/System.map-3.11.2-201.fc19.x86_64
        /boot/vmlinuz-3.11.2-201.fc19.x86_64

    If I fix the ramdisk directive in the .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img, the VM starts but the OS inside does not boot. In the tail of xl console I get:

        [ OK ] Reached target Basic System.
        dracut-initqueue[130]: Warning: Could not boot.
        dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/root does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist
        dracut-initqueue[130]: Warning: /dev/xvda2 does not exist
        Starting Dracut Emergency Shell...
        Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        Warning: /dev/fedora/root does not exist
        Warning: /dev/fedora/swap does not exist
        Warning: /dev/mapper/fedora-root does not exist
        Warning: /dev/mapper/fedora-swap does not exist
        Warning: /dev/xvda2 does not exist
        Generating "/run/initramfs/sosreport.txt"
        Entering emergency mode. Exit the shell to continue.
        Type "journalctl" to view system logs.
        You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.
        dracut:/#

    The .img files in /xen/domains/debian-wheezy exist and are listed in the disk section of debian-wheezy.cfg. So what should I do?

    Update: I've found that xl does not mount the images. In debian-wheezy.cfg I have:

        root = '/dev/xvda2 ro'
        disk = [
            'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
            'file:/xen/domains/debian-wheeze/swap.img,xvda1,w',
        ]

    And there is no /dev/xvda*, /dev/sda*, or /dev/hda* device in the VM.
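
    Two hedged observations, not from the original thread. First, the quoted disk stanza spells the swap path "debian-wheeze" while the disk line says "debian-wheezy"; that typo alone would stop one image from attaching. Second, the Fedora Dom0 initramfs is built to look for a Fedora root (/dev/mapper/fedora-root), so it can never find the Debian guest's disk; the usual way out is to boot the guest with its own distribution kernel via pygrub instead of the Dom0 kernel/ramdisk pair:

        # in /etc/xen/debian-wheezy.cfg, replace the kernel= and ramdisk= lines with:
        bootloader = '/usr/bin/pygrub'

    (xen-tools also has a --pygrub option for xen-create-image, which sets the image up for this boot path from the start.)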

    Read the article

  • Disk monitoring script with long filesystem names

    - by DD.
    Here is my df output:

        $ df -H
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/vg_app001-lv_root
                               34G   12G   21G  35% /
        tmpfs                 8.4G     0  8.4G   0% /dev/shm
        /dev/sda1             508M   54M  429M  12% /boot
        /dev/mapper/vg_app001-lv_home
                               19G  309M   17G   2% /home

    I want to run a disk monitoring script, but because the filesystem names are so long, those rows get split across two lines and the script fails. Any suggestions?
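
    A common fix, offered as a hedged suggestion: POSIX output mode prints exactly one record per line no matter how long the device name is, which makes the output script-friendly.

        $ df -P -H                                    # -P (--portability) never wraps rows
        $ df -P -H | awk 'NR > 1 { print $6, $5 }'    # e.g. mount point and use%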

    Read the article

  • How can I Setup overloaded method invocations in Moq?

    - by arootbeer
    I'm trying to mock a mapping interface, IMapper:

        public interface IMapper<TFoo, TBar>
        {
            TBar Map(TFoo foo);
            TFoo Map(TBar bar);
        }

    In my test, I'm setting the mock mapper up to expect an invocation of each (around an NHibernate update operation):

        //...
        _mapperMock.Setup(m => m.Map(fooMock.Object)).Returns(barMock.Object);
        _mapperMock.Setup(m => m.Map(barMock.Object)).Returns(fooMock.Object);
        //...

    However, when the second Map invocation is made, the mapper mock throws, because it is only expecting a single invocation. Watching the mapper mock during setup at runtime, I can see the Map(TFoo foo) overload get registered, and then see it get replaced when the Map(TBar bar) overload is set up. Is this a problem with the way Moq handles Setup, or is there a different syntax I need to use in this case?
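
    A hedged guess at the cause, not a confirmed diagnosis: if TFoo and TBar are closed over related types, or the two mock objects are statically typed as a common base, both lambdas compile against the same Map overload, so the second Setup overwrites the first. Forcing the static type of each argument makes overload resolution pick the intended method (Foo and Bar below are hypothetical stand-ins for the concrete type parameters):

        _mapperMock.Setup(m => m.Map((Foo)fooMock.Object)).Returns(barMock.Object);
        _mapperMock.Setup(m => m.Map((Bar)barMock.Object)).Returns(fooMock.Object);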

    Read the article

  • AutoMapper Problem - List won't Map

    - by Randy Minder
    I have the following class:

        public class Account
        {
            public int AccountID { get; set; }
            public Enterprise Enterprise { get; set; }
            public List<User> UserList { get; set; }
        }

    And I have the following method fragment:

        Entities.Account accountDto = new Entities.Account();
        DAL.Entities.Account account;

        Mapper.CreateMap<DAL.Entities.Account, Entities.Account>();
        Mapper.CreateMap<DAL.Entities.User, Entities.User>();

        account = DAL.Account.GetByPrimaryKey(this.Database, primaryKey, withChildren);
        Mapper.Map(account, accountDto);
        return accountDto;

    When the method is called, the Account class gets mapped correctly, but the list of users in the Account class does not (it is null). There are four User entities in the list that should get mapped. Could someone tell me what might be wrong?
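
    A hedged sketch of the usual suspects, not a confirmed diagnosis: either the DAL type exposes the collection under a different name, so convention-based mapping silently skips it, or the child collection was never loaded from the database. If it is a naming mismatch, an explicit member map fixes it ('Users' below is a guessed source property name):

        Mapper.CreateMap<DAL.Entities.Account, Entities.Account>()
            .ForMember(d => d.UserList, o => o.MapFrom(s => s.Users));
        // This would have flagged the unmapped member up front:
        Mapper.AssertConfigurationIsValid();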

    Read the article

  • MVVM/ViewModels and handling Authorization

    - by vdh_ant
    Hey guys, just wondering how people handle authorization when using MVVM and/or view models. If I weren't using VMs, I would be passing back the model, and it would have a property I could check to see whether a user can edit a given object/property; but when using MVVM I am disconnected from the business object... and thus no longer know what the security should be. Is this a case where the mapper should be aware of the authorization that is in place, and not copy across the data if the authorization check fails? If so, I'm guessing the mapper would have to set some properties on the VM to let the interface know which fields are missing data because of the authorization failure. And if this does occur within the mapper, how does it fit in with things like AutoMapper, etc.? Cheers, Anthony

    Read the article

  • A PHP design pattern for the model part [PHP Zend Framework]

    - by Matthieu
    I have a PHP MVC application using Zend Framework. As presented in the quickstart, I use three layers for the model part:

    - Model (business logic)
    - Data mapper
    - Table data gateway (or data access object, i.e. one class per SQL table)

    The model is UML-designed and totally independent of the DB. My problem is: I can't have multiple instances of the same "instance/record". For example: if I get the user "Chuck Norris" with id=5, this will create a new model instance whose members are filled by the data mapper (the data mapper queries the table data gateway, which queries the DB). Then, if I change the name to "Duck Norras", don't save it to the DB right away, and reload the same user into another variable, I have "synchronisation" problems... (different instances for the same record). Right now, I use the Multiton pattern: like Singleton, but with multiple instances indexed by a key (which is the user ID in our example). But this is complicating my development a lot, and my testing too. How do I do it right?
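
    A sketch of the Identity Map pattern, which is the textbook answer to exactly this problem: the data mapper, rather than the model, keeps one shared instance per primary key, so every load of id=5 returns the same object. Class and method names below are hypothetical, not from the original code.

        class UserMapper
        {
            private $_identityMap = array();

            public function find($id)
            {
                // same record requested twice => same instance returned
                if (isset($this->_identityMap[$id])) {
                    return $this->_identityMap[$id];
                }
                $row  = $this->_gateway->find($id);   // table data gateway
                $user = $this->_rowToModel($row);     // hydrate the model
                $this->_identityMap[$id] = $user;
                return $user;
            }
        }

    This keeps the bookkeeping out of the model classes and out of your tests: a fresh mapper per test starts with an empty map, so there is no global state to reset.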

    Read the article

  • Is AutoMapper able to auto-resolve types based on existing maps?

    - by Chi Chan
    I have the following code:

        [SetUp]
        public void SetMeUp()
        {
            Mapper.CreateMap<SourceObject, DestinationObject>();
        }

        [Test]
        public void Testing()
        {
            var source = new SourceObject { Id = 123 };
            var destination1 = Mapper.Map<SourceObject, DestinationObject>(source);
            var destination2 = Mapper.Map<ObjectBase, ObjectBase>(source);

            // Works
            Assert.That(destination1.Id == source.Id);
            // Fails, gives the same object back
            Assert.That(destination2 is DestinationObject);
        }

        public class ObjectBase
        {
            public int Id { get; set; }
        }

        public class SourceObject : ObjectBase { }

        public class DestinationObject : ObjectBase { }

    So basically, I want AutoMapper to automatically resolve the destination type to DestinationObject based on the existing maps set up in AutoMapper. Is there a way to achieve this?
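
    A hedged sketch using AutoMapper's polymorphic-mapping support: registering the derived pair with Include on a base map lets a base-typed Map call resolve to the derived destination at runtime.

        Mapper.CreateMap<ObjectBase, ObjectBase>()
              .Include<SourceObject, DestinationObject>();
        Mapper.CreateMap<SourceObject, DestinationObject>();

        // destination2 should now come back as a DestinationObject:
        var destination2 = Mapper.Map<ObjectBase, ObjectBase>(source);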

    Read the article

  • WCF Runtime Error while using Constructor

    - by Pranesh Nair
    Hi all, I am new to WCF. I am using a constructor in my WCF service's .svc.cs file... It throws this error when I use the constructor:

        The service type provided could not be loaded as a service because it does not have a default (parameter-less) constructor. To fix the problem, add a default constructor to the type, or pass an instance of the type to the host.

    When I remove the constructor, it works fine... but it's compulsory that I use the constructor... This is my code:

        namespace UserAuthentication
        {
            [ServiceBehavior(InstanceContextMode = System.ServiceModel.InstanceContextMode.Single)]
            public class UserAuthentication : UserRepository, IUserAuthentication
            {
                private ISqlMapper _mapper;
                private IRoleRepository _roleRepository;

                public UserAuthentication(ISqlMapper mapper) : base(mapper)
                {
                    _mapper = mapper;
                    _roleRepository = new RoleRepository(_mapper);
                }

                public string EduvisionLogin(EduvisionUser aUser, int SchoolID)
                {
                    UserRepository sampleCode = new UserRepository(_mapper);
                    sampleCode.Login(aUser);
                    return "Login Success";
                }
            }
        }

    Can anyone provide ideas, suggestions, or sample code on how to resolve this issue?
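
    One way out, sketched along the hint in the error message itself: because the service is marked InstanceContextMode.Single, WCF accepts a pre-built singleton instance, so no parameterless constructor is needed. For self-hosting (SqlMapper here is a hypothetical ISqlMapper implementation):

        var service = new UserAuthentication(new SqlMapper());
        using (var host = new ServiceHost(service))
        {
            host.Open();
            // ... keep the host running while serving requests ...
        }

    For IIS-hosted .svc services, the same idea requires a custom ServiceHostFactory that builds this ServiceHost and is referenced from the .svc file's Factory attribute.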

    Read the article

  • k-means based on MapReduce in Python

    - by user3616059
    I am going to write a mapper and reducer for the k-means algorithm. I think the best course of action is to put the distance calculation in the mapper and send each row to the reducer with the cluster ID as key and the row's coordinates as value. In the reducer, the centroids would be updated. I am writing this in Python. As you know, I have to use Hadoop Streaming to transfer data between STDIN and STDOUT. To my knowledge, when we print(key + "\t" + value), it is sent to the reducer. The reducer receives the data and calculates the new centroids, but when we print the new centroids, I don't think they are sent back to the mapper to calculate new clusters; they just go to STDOUT. And as you know, k-means is an iterative algorithm. So my question is: does Hadoop Streaming suffer at iterative programs, and should we employ MRJob for iterative programs instead?
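
    A minimal sketch of the streaming mapper under one common arrangement (an assumption, not from the post): the current centroids are distributed to each task as a side file with one comma-separated point per line, and the loop that re-runs the job with the reducer's output as the next round's centroids lives outside Hadoop, e.g. in a driver shell script.

        import sys

        def load_centroids(path='centroids.txt'):
            with open(path) as f:
                return [[float(x) for x in line.split(',')] for line in f]

        def closest(point, centroids):
            # index of the centroid at the smallest squared distance
            return min(range(len(centroids)),
                       key=lambda i: sum((p - c) ** 2
                                         for p, c in zip(point, centroids[i])))

        centroids = load_centroids()
        for line in sys.stdin:
            point = [float(x) for x in line.strip().split(',')]
            # key = cluster id, value = the row's coordinates
            print('%d\t%s' % (closest(point, centroids), line.strip()))

    Streaming itself has no iteration support, which is exactly why wrappers like MRJob (or a plain driver script that checks convergence between runs) are the usual answer.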

    Read the article

  • Hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows, and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try to use the same query again for a new mapper, and since these queries take a considerable amount of time, I'd like to prevent this in one of two ways that I've thought up:

    1) Limit the job to run only one mapper that queries the whole table, and call it good; or
    2) Somehow incorporate an offset in the query, so that if Hadoop does try to use a new mapper it won't grab the same rows.

    I feel like option (1) seems more promising, but I don't know if such a configuration is possible. Option (2) sounds nice in theory, but I have no idea how I would keep track of the mappers being created, or whether it is at all possible to detect that and reconfigure. Help is appreciated; I'm mainly looking for a way to pull all of the DB table data without several copies of the same query running, because that would be a waste of time.
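
    A hedged note plus a sketch (old mapred API; the table and record-class names are placeholders): DBInputFormat already pages through the table by appending LIMIT/OFFSET per split, so distinct mappers do not re-run the identical query; and option (1) is a single configuration call away.

        JobConf conf = new JobConf(MyJob.class);
        conf.setNumMapTasks(1);   // option (1): one mapper reads the whole table
        DBConfiguration.configureDB(conf, "org.postgresql.Driver",
                "jdbc:postgresql://dbhost/mydb", "user", "pass");
        DBInputFormat.setInput(conf, MyRecord.class,
                "SELECT * FROM big_table",          // input query
                "SELECT COUNT(*) FROM big_table");  // count query used for splits

    setNumMapTasks is normally only a hint, but DBInputFormat derives its split count from it, so in practice a single mapper results.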

    Read the article

  • Error mounting HDD

    - by Vikramjeet
    I get the following error whenever I mount my external HDD. It was working before; then I opted to safely remove the drive. Now it gives me this error:

        Error mounting: mount exited with exit code 13:
        ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 8850 usa_count: 65535: Invalid argument
        Actual VCN (0x800006009000000) of index buffer is different from expected VCN (0x0).
        Failed to mount '/dev/sdb1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate it
        and mount a different device under the /dev/mapper/ directory, (e.g.
        /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
        for more details.

    Read the article

  • Unable to mount internal disk: mount exited with exit code 13

    - by Masri
    Ubuntu runs into an error when I try to mount one of my internal disks, giving this error message:

        Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 3).
        Failed to mount '/dev/sda7': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate it
        and mount a different device under the /dev/mapper/ directory, (e.g.
        /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
        for more details.

    Please advise how to solve the above error. Many thanks in advance.
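
    A hedged suggestion if no Windows machine is handy: the $MFTMirr/$MFT mismatch is one inconsistency that ntfsfix (from the ntfs-3g tools) is known to repair. It is not a full chkdsk replacement, so the message's own advice still stands if this fails.

        sudo ntfsfix /dev/sda7
        sudo mount /dev/sda7 /mnt    # retry the mount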

    Read the article

  • Can't access Windows 7 OS in Ubuntu

    - by Myers
    When I try to access my Windows 7 OS, I get the following message, and I have no idea what to make of it:

        Error mounting /dev/sda2 at /media/name/Windows7_OS: Command-line
        `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda2" "/media/name/Windows7_OS"'
        exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate it
        and mount a different device under the /dev/mapper/ directory, (e.g.
        /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
        for more details.

    What should I do?

    Read the article
