Search Results

Search found 88714 results on 3549 pages for 'data type'.

Page 301/3549

  • No Data Received

    - by Ben Moore
    Out of the blue, around 40% of my website's community can no longer visit, saying they're getting "No Data Received" errors. We've taken our firewall offline, tried going through systems such as Cloudflare, and checked our .htaccess, all to no avail. I've asked affected users to run traceroutes, and the weird thing is that the traffic appears to be stopped at the ISP level. Can anyone suggest other things that may be causing this error?
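
    A hedged diagnostic sketch, assuming an affected user can run these from a shell and using example.com as a placeholder for the real domain: comparing a verbose HTTP fetch with a path trace helps distinguish an empty or reset HTTP response from packets never reaching the server at all.

        # Run from an affected user's machine; example.com is a placeholder
        curl -v http://example.com/ -o /dev/null    # does TCP connect? is the reply empty or reset?
        mtr -rw example.com                         # per-hop packet loss, to see where traffic stops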

    Read the article

  • Data replication between two web nodes

    - by HTF
    I have a WordPress installation running on two web servers (Nginx). There is unidirectional synchronization from server A to server B, and I'm using lsyncd for this purpose. With this configuration I have to add blog posts from the first web server so that the data is replicated to the second one. How can I force access to the WordPress back-end only from the first web server? Please note that both servers use the same domain for WordPress. Regards
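
    One possible approach, sketched here as an nginx configuration fragment that is not taken from the question (the location pattern, the 403 response and the a.example.com hostname are all assumptions): blocking the admin URLs on server B forces all back-end work through server A.

        # nginx on server B (the replica) - hypothetical fragment
        location ~ ^/(wp-admin|wp-login\.php) {
            return 403;
            # or, to send editors to the primary instead:
            # return 301 http://a.example.com$request_uri;
        }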

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server which I've connected to multiple PCs in my workplace. Sadly, data transfer speeds peak at about 3 MB/sec per connection, which works out slow for file transfers, especially large ones. I'm using Windows file sharing; the server is Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect bottlenecks in my network and improve file sharing speed within the network?
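
    A hedged first step, assuming iperf can be installed on the server and on one client (it is not part of Windows; FILESERVER is a placeholder name): measuring raw TCP throughput separates a network bottleneck from a disk or SMB one.

        # On the server
        iperf -s
        # On a client
        iperf -c FILESERVER -t 30
        # If iperf reports close to wire speed (on the order of 940 Mbit/s on gigabit Ethernet)
        # while SMB copies stay near 3 MB/s, look at the server's disks and SMB settings
        # rather than the cabling or switch.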

    Read the article

  • Determine Configured Location of MySQL's data directory OR all loaded *.cnf Locations

    - by alanstorm
    I'm not a sys-admin, but sometimes I play one at work. I've inherited a virtual server that had MySQL installed from source. I'm gathering as much information about the install as I can (the original people who installed it are, of course, not a resource). How can I find (1) the default/current location of the MySQL data directory (the binary table files, often stored in a directory named "data"), and (2) any default or custom-loaded cnf files? Looking for solutions that are a bit more sophisticated than a find / -iname '*.cnf' :)
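
    A minimal sketch of read-only checks, assuming the server is running and you can connect as a privileged user (the username is a placeholder, and for a from-source install mysqld may not be on the PATH):

        # Ask the running server where its data directory and base install live
        mysql -u root -p -e "SELECT @@datadir, @@basedir;"

        # Ask the mysqld binary which option files it reads, and in what order
        mysqld --verbose --help | grep -A 1 "Default options"

        # Check whether the running instance was started with an explicit --defaults-file
        ps aux | grep [m]ysqld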

    Read the article

  • How can I add metadata to NTFS files/folders?

    - by Pwdr
    I want to tag different file types (i.e. .pdf, .epub, .iso, .bin, folders, ...) using the same descriptive fields. For example, I would like a metadata field "type" which would be "eBook" on pdf and epub files, and "CD-Image" on iso and bin files. I read about Alternate Data Streams (ADS) as a way to make this possible. Does anyone know a good program for Windows 7 to tag different files this way and search for them? It is important for me that the metadata is NOT stored in a separate database; I move the files a lot and need to stay flexible (ADSs 'stick' to the files). Any ideas?

    Read the article

  • mysqldump is not dumping my data

    - by oompahloompah
    I am running mysqldump on Ubuntu Linux (10.04 LTS). My MySQL version info is: mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1. I used the following command: mysqldump -u username -p dbname > dbname_backup.sql. However, when I opened the generated .sql file, I saw that most of the tables had only the schema dumped, and in the few cases where the actual data was dumped, only one or two records were present (there are at least several tens of records in each table). Does anyone know what may be going on?
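
    A hedged sketch of a re-run plus a quick sanity check, with username/dbname as placeholders (--single-transaction is only meaningful when the tables are InnoDB):

        # Dump one database to a file; > redirects mysqldump's output
        mysqldump -u username -p --single-transaction dbname > dbname_backup.sql

        # Rough check that row data, not just schema, made it into the dump
        grep -c "^INSERT INTO" dbname_backup.sql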

    Read the article

  • Storing data, cost/gigabyte

    - by Micaela
    Can anyone give me a general estimate of what web hosts charge for data storage ($/gigabyte)? A shared web-hosting service is what I'm referring to. I have been trying to compare the price of storage offered by business process automation SaaS providers, and now I'm looking at it more generally.

    Read the article

  • Why does a hard disk suddenly look to Windows as if it "needs to be formatted"?

    - by pufferfish
    This is more of a theory question, but what are the reason(s) for a disk to suddenly cause Windows to start saying it "needs to be formatted"? It happens to an IDE disk that I have in a cheap external enclosure, and I can usually get most of the data back by using software like recuva. It's now happened to an internal disk I have. I'm not looking for software to fix this (although links would be appreciated), but rather a low-level explanation as to what gets corrupted on the disk.

    Read the article

  • Data Sources (ODBC) hangs when trying to create a new database connection

    - by FredrikD
    When I try to create a new database connection, the Data Sources (ODBC) program hangs, or takes a very long time to find the list of available SQL Servers. This only happens when there are other computers on the network; when my machine (a standard Windows 7 laptop) is alone, it works just fine. My question is: what should I look for in terms of SQL Server or ODBC configuration that would take away this random behaviour?

    Read the article

  • gunzip: invalid compressed data--format violated

    - by Arunjith
    Problem definition: I transferred a tar.gz file from a Linux machine to a Windows partition. The Windows partition is mounted on the Linux server via CIFS. OS: Red Hat Enterprise Linux Server release 5. Symptom: the copy completes successfully, but an integrity check with gunzip -t then fails with the following error: gunzip -t Backup-28--Jun--2011--Tuesday.tar.gz gunzip: Backup-28--Jun--2011--Tuesday.tar.gz: invalid compressed data--format violated. Trying to untar it (tar -xvzf) fails as well.
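
    A hedged way to narrow this down, assuming the original archive still exists on the source machine (the paths and mount point are placeholders): comparing checksums on both ends shows whether the file was damaged in transit rather than being bad to begin with.

        # On the source Linux machine
        md5sum /path/to/Backup-28--Jun--2011--Tuesday.tar.gz

        # On the server, against the copy on the CIFS mount
        md5sum /mnt/windows/Backup-28--Jun--2011--Tuesday.tar.gz

        # Differing sums mean the copy was corrupted in transit; re-copying, remounting the
        # share, or transferring with rsync/scp instead may be worth trying.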

    Read the article

  • Hard Disk recovery

    - by Shaihi
    I have 3 disks of the same type, model and year of production. All the disks were used as part of a generic IBM server solution. My problem is that all 3 disks suffered the same malfunction at exactly the same time and are now non-functional. I went to two different experts' laboratories and got the same answer: to recover the data they need another identical disk from which they can take spare parts. Can my case really be that unusual? Anyway, I am not sure if this question belongs on this forum, but I am looking to buy the following disk: IBM ESERVER XSERIES, IBM P/N 24P3707, IBM FRU 24P3708, 146.8GB USCSI 10K RPM, PART NUMBER 9V2005-027. I already bought a disk with the same part number, but the labs said that apparently I need a disk that was manufactured in the same factory, which means that all the numbers have to be exactly the same. If anybody knows where I can purchase such a disk (the information on the lost disks is really important to me), please let me know.

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires the users to have an account on the MS SQL server, with read/write access to the program's database. My concern is that since these users will now have write access to the database, they could connect to the SQL server directly, outside of the program's client, and mess with the data in the tables. Is there any way I can prevent that kind of direct access to the database while still allowing access via the client program?

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    Hello, we are developing a web application that will deal with highly sensitive (financial) data of clients (the audience is medium to large sized businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and the related hosting arrangement should instill a lot of confidence in them. We are looking into using a cloud based service like Linode, Amazon EC2, etc. To allow for maximum flexibility, we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article

  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupt after a thunderstorm. It used to be one partition of 2.5 TB, but now it shows two partitions. It's weird, because 300 GB of free space is about how much it had before corrupting, but that space was part of the first partition. I tried:

        $ sudo resize2fs -f /dev/sdb1
        Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks.
        resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1
        Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem after the aborted resize operation.

        $ sudo e2fsck -f /dev/sdb1
        e2fsck 1.42 (29-Nov-2011)
        The filesystem size (according to the superblock) is 610471680 blocks
        The physical size of the device is 536870911 blocks
        Either the superblock or the partition table is likely to be corrupt!
        Abort? n
        ....
        Error reading block 537395215 (Invalid argument) while reading inode and block bitmaps.  Ignore error<y>? yes
        Force rewrite<y>? yes
        Error writing block 537395215 (Invalid argument) while reading inode and block bitmaps.  Ignore error<y>? yes
        ...

    I get a lot of these. I can't use e2fsck -y because the first question aborts if I say "y". If I put a weight on the 'y' key it fails, because none of the errors were really fixed. I asked this question before and tried using gparted, but gparted fails because the first thing it does is run e2fsck -f -y -v /dev/sdb1, which gives the same error. The disk status says healthy, and there are no bad blocks. This is very frustrating because I can see the data in testdisk and it looks like it's all there. I already bought another 2.5 TB drive and made a clone using dd. The next step, if I can't fix this, is to wipe that drive and just move the data with testdisk, but it seems certain folders will copy endlessly until the drive is full because of symlinks or errors, so that is also a difficult option.

        $ sudo fdisk -l
        Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes
        255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005da5e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048  4294969342  2147483647+  83  Linux

        $ sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911

    badfile is empty. I also tried changing the superblock with "fsck -b", but all of them are the same.
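
    A hedged, read-only sketch of one more thing to try, and only against the dd clone rather than the original disk (/dev/sdX1 is a placeholder for the clone's partition): listing the filesystem's backup superblocks and pointing e2fsck at one of them, together with the 4k block size, sometimes gets past a damaged primary superblock.

        # Read-only: list the backup superblock locations recorded in the filesystem
        sudo dumpe2fs /dev/sdX1 | grep -i superblock

        # On the clone only: retry fsck against one of the listed backups (32768 is just an example)
        sudo e2fsck -b 32768 -B 4096 /dev/sdX1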

    Read the article

  • Mac: Resize windows partition w/o destroying data?

    - by jbehren
    Is there a method/utility to resize the partitions on a dual-boot MacBook Air without destroying their contents? I made the Windows partition too small initially, and all the places I've looked state that resizing now using Boot Camp will destroy all data on the Win7 partition. I would prefer free, but I'm open to a reasonably priced utility that can grow the Win7 partition into the available space (I can use Boot Camp to shrink the OS X partition without any problems).

    Read the article

  • Three disk (possibly RAID) data recovery

    - by Martin
    I have on my desk three 160 GB disks that were once part of an HP ProLiant Windows 2003 server. They may have been part of a RAID configuration of some sort, and they may or may not be damaged in some way. When I interface them via USB, one of them shows up as a drive, but unformatted; the others show up as uninitialized disks in Disk Management. An alternative explanation is that those two drives were simply never used. What's my first step? I've recovered data off damaged drives before but have never had anything to do with RAID configs. How can I even tell if any type of RAID was used?
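
    A hedged starting point for the "was this RAID at all?" question, assuming the disks can be attached to a Linux machine and appear as /dev/sdb, /dev/sdc and /dev/sdd (placeholder names); note that a ProLiant's Smart Array controller keeps its own on-disk metadata that these tools may not recognise.

        # Look for Linux software-RAID (md) superblocks on each disk
        sudo mdadm --examine /dev/sdb /dev/sdc /dev/sdd

        # Show partition tables and any filesystem signatures the kernel can see
        sudo fdisk -l /dev/sdb /dev/sdc /dev/sdd
        sudo blkid /dev/sdb* /dev/sdc* /dev/sdd*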

    Read the article

  • How to Export/Transfer DHCP data?

    - by sreevatsa
    We have a very old HP ML110 server that is giving hardware (power) trouble, and we are hosting DHCP services on it on Windows 2000. I would like to transfer all the DHCP data (it has reserved IPs) from this old server to a new server running Windows Server 2003. How do I do this?
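
    A hedged sketch, run at a command prompt on each server: the netsh dhcp export/import pair carries scopes, leases and reservations, but the export subcommand was introduced with Windows Server 2003, so on the Windows 2000 side it may be necessary to use the Resource Kit's dhcpexim tool instead. The file path below is a placeholder.

        rem On the old server, if the export subcommand is available there
        netsh dhcp server export C:\dhcpdb.txt all

        rem Copy C:\dhcpdb.txt to the new Windows Server 2003 machine, then:
        netsh dhcp server import C:\dhcpdb.txt all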

    Read the article

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet in which six columns of data are currently all combined into one column. I need to separate them out, but the issue is that the first column holds words that are sometimes one word and sometimes two. Here is an example:

        Twin 70 442 186 310 221
        Twin Futon 70 389 160 272 195
        XL twin 70 463 196 324 231
        XL Twin Futon 70 418 174 293 209
        Double 100 590 245 413 295

    How can I separate these data sets while keeping the words all in the same column?
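
    One possible approach, sketched in shell/awk under the assumption that the sheet can be exported to a plain-text file (sizes.txt is a placeholder name): since the last five fields on each row are always numbers, everything before them must belong to the description.

        # Rebuild each row as description + 5 numeric columns, tab-separated, ready to paste back into Excel
        awk '{
            name = $1
            for (i = 2; i <= NF - 5; i++) name = name " " $i    # any extra words before the last five fields
            printf "%s\t%s\t%s\t%s\t%s\t%s\n", name, $(NF-4), $(NF-3), $(NF-2), $(NF-1), $NF
        }' sizes.txt > sizes.tsv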

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
Introduction

It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we'll see exactly how. Conventions in NHibernate are supported in two ways:

- Naming of tables and columns when not explicitly indicated in the mappings;
- Full domain mapping.

Naming of Tables and Columns

NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. Concretely, it must be an implementation of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two:

- DefaultNamingStrategy: the default implementation, where each column and table are mapped to identically named properties and classes; for example, "MyEntity" will translate to "MyEntity";
- ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments; for example, entity "MyEntity" will be mapped to a "my_entity" table.

The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method:

    cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance);

Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don't specify one.

Domain Mapping

In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions. You apply code conventions like this:

    //pick the types that you want to map
    IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

    //conventions based mapper
    ConventionModelMapper mapper = new ConventionModelMapper();

    HbmMapping mapping = mapper.CompileMappingFor(types);

    //the one and only configuration instance
    Configuration cfg = ...;
    cfg.AddMapping(mapping);

This is a very simple example; it lacks, at least, the id generation strategy, which you can add with an event handler like this:

    mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) =>
    {
        classCustomizer.Id(x =>
        {
            //set the hilo generator
            x.Generator(Generators.HighLow);
        });
    };

The mapper will fire events like this whenever it needs to get information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventions mapper doesn't know how to find and configure them, which is a pity, but, alas, not difficult to overcome.
To start, for my projects, I have this rule: each entity exposes a public property of type ISet<T>, where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily:

    mapper.IsOneToMany((MemberInfo member, Boolean isLikely) =>
    {
        Type sourceType = member.DeclaringType;
        Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

        //check if the property is of a generic collection type
        if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
        {
            Type destinationEntityType = destinationType.GetGenericArguments().Single();

            //check if the type of the generic collection property is an entity
            if (mapper.ModelInspector.IsEntity(destinationEntityType) == true)
            {
                //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                if (collectionInDestinationType != null)
                {
                    return (false);
                }
            }
        }

        return (true);
    });

    mapper.IsManyToMany((MemberInfo member, Boolean isLikely) =>
    {
        //a relation is many to many if it isn't one to many
        Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member);
        return (!isOneToMany);
    });

    mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) =>
    {
        Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
        //set the mapping table column names from each source entity name plus the _Id suffix
        collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
    };

    mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) =>
    {
        if (modelInspector.IsManyToMany(member.LocalMember) == true)
        {
            propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

            Type sourceType = member.LocalMember.DeclaringType;
            Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

            //set inverse on the relation of the alphabetically first entity name
            propertyCustomizer.Inverse(sourceType.Name == names.First());
            //set mapping table name from the entity names in alphabetical order
            propertyCustomizer.Table(String.Join("_", names));
        }
    };

We have to understand how the conventions mapper thinks:

- For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many;
- Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many;
- Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table. Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and combine them with a "_"; for the first alphabetical entity, we set its relation to inverse (remember, on a many-to-many relation, only one endpoint must be marked as inverse); finally, we set the column name as the name of the entity with an "_Id" suffix;
- Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the "_Id" suffix, as we did for the set.

And that's it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable:

    public class ManyToManyConventionModelMapper : ConventionModelMapper
    {
        public ManyToManyConventionModelMapper()
        {
            base.IsOneToMany((MemberInfo member, Boolean isLikely) =>
            {
                return (this.IsOneToMany(member, isLikely));
            });

            base.IsManyToMany((MemberInfo member, Boolean isLikely) =>
            {
                return (this.IsManyToMany(member, isLikely));
            });

            base.BeforeMapManyToMany += this.BeforeMapManyToMany;
            base.BeforeMapSet += this.BeforeMapSet;
        }

        protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely)
        {
            //a relation is many to many if it isn't one to many
            Boolean isOneToMany = this.ModelInspector.IsOneToMany(member);
            return (!isOneToMany);
        }

        protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely)
        {
            Type sourceType = member.DeclaringType;
            Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

            //check if the property is of a generic collection type
            if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
            {
                Type destinationEntityType = destinationType.GetGenericArguments().Single();

                //check if the type of the generic collection property is an entity
                if (this.ModelInspector.IsEntity(destinationEntityType) == true)
                {
                    //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                    PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                    if (collectionInDestinationType != null)
                    {
                        return (false);
                    }
                }
            }

            return (true);
        }

        protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer)
        {
            Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            //set the mapping table column names from each source entity name plus the _Id suffix
            collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
        }

        protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer)
        {
            if (modelInspector.IsManyToMany(member.LocalMember) == true)
            {
                propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                Type sourceType = member.LocalMember.DeclaringType;
                Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                //set inverse on the relation of the alphabetically first entity name
                propertyCustomizer.Inverse(sourceType.Name == names.First());
                //set mapping table name from the entity names in alphabetical order
                propertyCustomizer.Table(String.Join("_", names));
            }
        }
    }

Conclusion

Of course, there is much more to mapping than this; I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!

    Read the article
