Search Results

Search found 29619 results on 1185 pages for 'android virtual device'.

Page 564/1185

  • MVC view engine that can render classic ASP?

    - by David Lively
    I'm about to start integrating ASP.NET MVC into an existing 200k+ line classic ASP application. Rewriting the existing code (which runs an entire business, not just the public-facing website) is not an option. The existing ASP 3.0 pages are quite nicely structured (given what was available when this model was put into place nearly 15 years ago), like so:

        <!--#include virtual="/_lib/user.asp"-->
        <!--#include virtual="/_lib/db.asp"-->
        <!--#include virtual="/_lib/stats.asp"-->
        <!-- #include virtual="/_template/topbar.asp"-->
        <!-- #include virtual="/_template/lftbar.asp"-->
        <h1>page name</h1>
        <p>some content goes here</p>
        <p>some content goes here</p>
        <p>.....</p>
        <h2>Today's Top Forum Posts</h2>
        <%=forum_top_posts(1)%>
        <!--#include virtual="/_template/footer.asp"-->

    ... or somesuch. Some pages will include function and sub definitions, but for the most part these exist in include files. I'm wondering how difficult it would be to write an MVC view engine that could render classic ASP, preferably using the existing ASP.DLL. In that case, I could replace

        <!--#include virtual="/_lib/user.asp"-->

    with

        <%= Html.RenderPartial("lib/user"); %>

    as I gradually move existing pages from ASP to ASP.NET. Since the existing formatting and library includes are used very extensively, they must be maintained. I'd like to avoid having to maintain two separate versions. I know the ideal way to go about this would be to simply start a new app and say to hell with the ASP code, however, that simply ain't gonna happen. Ideas?
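
    For what it's worth, here is a minimal, assumption-heavy sketch of one crude way a bridging view engine could look. It is not a true ASP.DLL-hosting engine: it simply lets the classic ASP pipeline render the page by re-requesting it over a loopback HTTP call and splicing the markup into the MVC output, so session state and authentication would not flow across that call, and the name-to-path mapping ("lib/user" to "/lib/user.asp") is invented for illustration.

        using System;
        using System.IO;
        using System.Net;
        using System.Web.Mvc;

        // Proxy "view" that asks IIS (and therefore ASP.DLL) to render a classic ASP page,
        // then writes the returned markup into the MVC response.
        public class ClassicAspView : IView
        {
            private readonly string _virtualPath; // e.g. "/lib/user.asp"

            public ClassicAspView(string virtualPath)
            {
                _virtualPath = virtualPath;
            }

            public void Render(ViewContext viewContext, TextWriter writer)
            {
                Uri requestUrl = viewContext.HttpContext.Request.Url;
                Uri aspUrl = new Uri(requestUrl, _virtualPath);
                using (WebClient client = new WebClient())
                {
                    writer.Write(client.DownloadString(aspUrl.AbsoluteUri));
                }
            }
        }

        public class ClassicAspViewEngine : IViewEngine
        {
            public ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
            {
                // Straight name-to-path mapping; adjust to the real /_lib and /_template layout.
                string virtualPath = "/" + partialViewName.TrimStart('/') + ".asp";
                return new ViewEngineResult(new ClassicAspView(virtualPath), this);
            }

            public ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
            {
                return FindPartialView(controllerContext, viewName, useCache);
            }

            public void ReleaseView(ControllerContext controllerContext, IView view) { }
        }

    The engine would be registered at application startup with ViewEngines.Engines.Add(new ClassicAspViewEngine()).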

    Read the article

  • NHibernate: Mapping multiple classes from a single table row

    - by Michael Kurtz
    I couldn't find an answer to this specific question. I am trying to keep my domain model object-oriented and re-use objects where possible. I am having an issue determining how to provide a mapping to multiple classes from a single row. Let me explain with an example: I have a single table, call it Customer. A customer has several attributes; but, for brevity, assume it has Id, Name, Address, City, State, ZipCode. I would like to create a Customer and Address class that look like this:

        public class Customer
        {
            public virtual long Id { get; set; }
            public virtual string Name { get; set; }
            public virtual Address Address { get; set; }
        }

        public class Address
        {
            public virtual string Address { get; set; }
            public virtual string City { get; set; }
            public virtual string State { get; set; }
            public virtual string ZipCode { get; set; }
        }

    What I am having trouble with is determining what the mapping would be for the Address class within the Customer class. There is no Address table and there isn't a "set" of addresses associated with a Customer. I just want a more object-oriented view of the Customer table in code. There are several other tables that have address information in them and it would be nice to have a reusable Address class to deal with them. Addresses are not shared so breaking all addresses into a separate table with foreign keys seems to be overkill and, actually, more painful since I would need foreign keys to multiple tables. Can someone enlighten me on this type of mapping? Please provide an example if you can. Thanks for any insights! -Mike
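
    What is being described is what NHibernate calls a component: a value class whose properties map onto columns of the owning entity's table. A hedged sketch, shown with Fluent NHibernate (classic hbm.xml has an equivalent <component> element); the Street property name is an adjustment, since C# will not allow a property named Address inside a class named Address, and the column names are assumed to match the question.

        using FluentNHibernate.Mapping;

        public class CustomerMap : ClassMap<Customer>
        {
            public CustomerMap()
            {
                Table("Customer");
                Id(x => x.Id);
                Map(x => x.Name);

                // Address is mapped as a component: its properties live in the
                // Customer row, but the code sees a separate, reusable class.
                Component(x => x.Address, part =>
                {
                    part.Map(a => a.Street, "Address"); // property renamed for the sketch; column stays "Address"
                    part.Map(a => a.City);
                    part.Map(a => a.State);
                    part.Map(a => a.ZipCode);
                });
            }
        }

    The same Address class can then be reused as a component inside the mappings for the other tables that carry address columns.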

    Read the article

  • iPhone 4.0 on iPhone but still Ad-Hoc compile for 3.1.3?

    - by Mark
    Device: Version: 3.1, Build: 3511
    Device: iPhone
    OS: iPhone OS 4.0
    Xcode 3.2.2 (old)
    Xcode 3.2.3 (new; for the iPhone 4.0 beta)

    Background: As you can see, I installed 4.0 on my iPhone, since I read on this forum that it's really hard to near impossible to downgrade back to 3.1.3, but it's the only device I have and use for development. When I try to continue to develop and build with the old Xcode, it tells me that "No provisioned iPhone OS device is connected". When I select Simulator it does compile and build; however, when I distribute this file it does not work on the devices of my testers: they get a signing error. When I run the new Xcode, it does compile and build on the device, and when I distribute this file it does work on the devices of my testers (which are running the current official version 3.1.3). Questions: Why is there a difference between building for Simulator and Device? A simulator build never seems to work on the devices of my testers because of signing issues, while the build for device does work. Currently it seems the old Xcode has become useless; however, I read that you may not use the beta Xcode to build your application for release. So, knowing the above, how am I able to pull this off with my current setup, given that the old Xcode won't let me build properly?

    Read the article

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try and be very clear.

    First, the setup: I have a C#/ASP.NET web application that is publicly facing on my main domain (www), let's call it www.mysite.com. Nothing fancy, just a front-end that connects to SQL to display records. Then, I have a second C#/ASP.NET web application, secured using forms authentication, running on a subdomain, let's call it admin.mysite.com. This is a very lightweight CMS system to administer the public site.

    Now, the problem: Both of these sites run fine for basic tasks; however, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomains actually resolve/redirect to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder.

    Let me explain a little more. The CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images. When attempting disk access from the root app, I am able to write to the virtual directory, but cannot do the opposite -- that is, write to the root from the virtual directory; I get security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot because it's a secured folder that the public can't see! The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!
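
    For reference, a small sketch of the upload step written against an absolute physical path pulled from configuration rather than Server.MapPath relative to the admin virtual directory. The appSettings key and path are invented, and whether the write actually succeeds still comes down to the trust level and NTFS permissions the host grants, which is the crux of the question.

        using System.Configuration;
        using System.IO;
        using System.Web;

        public static class PhotoUploader
        {
            // Saves an uploaded file into the public site's /images folder using an
            // absolute physical path from web.config, so the admin app never has to
            // resolve paths relative to its own virtual directory.
            public static void SaveToPublicImages(HttpPostedFile upload)
            {
                string imageRoot = ConfigurationManager.AppSettings["PublicImageRoot"]; // e.g. D:\hosting\mysite\images (hypothetical)
                string fileName = Path.GetFileName(upload.FileName);
                upload.SaveAs(Path.Combine(imageRoot, fileName));
            }
        }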

    Read the article

  • Modelling problem - Networked devices with commands

    - by Schneider
    I encountered a head-scratching modelling problem today: We are modelling a physical control system composed of Devices and NetworkDevices. An example of a Device is a TV. An example of a NetworkDevice is an IR transceiver with an Ethernet connection. As you can see, to be able to control the TV over the internet we must connect the Device to the NetworkDevice. There is a many-to-one relationship between Device and NetworkDevice, i.e. a TV only has one NetworkDevice (the IR transceiver), but the IR transceiver may control many Devices (e.g. many TVs). So far no problem. The complicated bit is that every Device has a collection of Commands. The type of the Command (e.g. IrCommand, SerialCommand - N.B. not currently modelled) depends on the type of NetworkDevice that the Device is connected to. In the current legacy system the Device has a collection of generic Commands (no typing) where fields are "interpreted" depending on the NetworkDevice type. How do I go about modelling this in OOP such that:

    1. You can only ever add a Command of the appropriate type, given the NetworkDevice the Device is attached to?
    2. If I change the NetworkDevice, the Commands collection changes to the appropriate type.
    3. The API is simple/elegant/intuitive to use.
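
    One direction, sketched in C# purely as an illustration (every name beyond Device, NetworkDevice and Command is invented): parameterise both Device and NetworkDevice by the command type so the compiler enforces the first requirement. The trade-off is that "changing" the NetworkDevice becomes constructing a new Device<TCommand> rather than mutating an existing one.

        using System.Collections.Generic;

        public abstract class Command { }
        public sealed class IrCommand : Command { }
        public sealed class SerialCommand : Command { }

        public abstract class NetworkDevice<TCommand> where TCommand : Command
        {
            // How a command of this transport's type is actually sent.
            public abstract void Send(TCommand command);
        }

        public sealed class IrTransceiver : NetworkDevice<IrCommand>
        {
            public override void Send(IrCommand command) { /* transmit over IR */ }
        }

        public class Device<TCommand> where TCommand : Command
        {
            private readonly List<TCommand> commands = new List<TCommand>();

            public Device(NetworkDevice<TCommand> network)
            {
                Network = network;
            }

            public NetworkDevice<TCommand> Network { get; private set; }
            public IList<TCommand> Commands { get { return commands; } }
        }

        // Usage: only IrCommands can be added to a TV wired to an IR transceiver.
        //   var tv = new Device<IrCommand>(new IrTransceiver());
        //   tv.Commands.Add(new IrCommand());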

    Read the article

  • Fluent NHibernate automap a HasManyToMany using a generic type

    - by zulkamal
    I have a bunch of domain entities that can be keyword tagged (a Tag is also an entity). I want to do a normal many-to-many (Tag - TagReview <- Review) table relationship but I don't want to have to create a new concrete relationship on both the Entity and Tag every single time I add a new entity. I was hoping to do a generic based Tag and do this:

        // Tag
        public class Tag<T>
        {
            public virtual int Id { get; private set; }
            public virtual string Name { get; set; }
            public virtual IList<T> Entities { get; set; }

            public Tag()
            {
                Entities = new List<T>();
            }
        }

        // Review
        public class Review
        {
            public virtual string Id { get; private set; }
            public virtual string Title { get; set; }
            public virtual string Content { get; set; }
            public virtual IList<Tag<Review>> Tags { get; set; }

            public Review()
            {
                Tags = new List<Tag<Review>>();
            }
        }

    Unfortunately I get an exception:

        ----> System.ArgumentException : Cannot create an instance of FluentNHibernate.Automapping.AutoMapping`1[Example.Entities.Tag`1[T]] because Type.ContainsGenericParameters is true.

    I anticipate there will be maybe 5-10 entities so mapping normally would be ok but is there a way to do something like this?
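
    For reference, a hedged sketch of the usual workaround: the automapper cannot instantiate a mapping for an open generic type, so one option is to give it closed, concrete subclasses and keep the open generic itself out of the model (which also means Review.Tags would be declared as IList<ReviewTag>). The setup calls are from memory and may need adjusting to the Fluent NHibernate version in use.

        using FluentNHibernate.Automapping;

        // A closed, concrete tag type per taggable entity; the open generic Tag<T>
        // itself is never handed to the automapper.
        public class ReviewTag : Tag<Review> { }

        public static class MappingSetup
        {
            public static AutoPersistenceModel BuildModel()
            {
                return AutoMap.AssemblyOf<Review>()
                    .IgnoreBase(typeof(Tag<>))                // do not map the open generic base
                    .Where(t => !t.IsGenericTypeDefinition);  // only hand closed types to the automapper
            }
        }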

    Read the article

  • EF 6 Code First Many to many With Payload and self referencing many to many

    - by lesley86
    I have a problem where I have a many-to-many relationship, and on one of the tables there will be a self-referencing many-to-many. Basically a school can have zero or many groups, and a group can belong to zero or many schools. The groups table will also contain a parent-child many-to-many with itself, because a group can be a child of another group or have no children, that child can have a child, one child can also have many parents, or an entity can have no parents. I created a mapping table with a payload to solve the first many-to-many problem. Code snippet:

        public class School
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
        }

        public class SchoolGroup
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
        }

        public class SchoolGroupMap
        {
            public virtual School School { get; set; }
            public virtual SchoolGroup SchoolGroup { get; set; }
        }

    I then tried modifying the code in the following way for the self-referencing many-to-many:

        public class SchoolGroup
        {
            public virtual ICollection<SchoolGroupMap> SchoolGroupMaps { get; set; }
            public virtual ICollection<SchoolGroup> Parents { get; set; }
            public virtual ICollection<SchoolGroup> Children { get; set; }
        }

    I changed the context with HasMany and an auto mapping table (forgive me, I have been trying so many things today that I do not have the exact code). I received an error that the properties on the classes must match. Can anyone help, please? I want to create navigation properties on the self-referencing many-to-many. Also, a seed example would be appreciated. Regards
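
    For the self-referencing side, a hedged sketch of the EF6 fluent configuration, assuming that relationship carries no payload (the School/SchoolGroup link keeps its explicit SchoolGroupMap entity as above). The join table and column names are invented for the sketch.

        using System.Data.Entity;

        public class SchoolContext : DbContext
        {
            public DbSet<School> Schools { get; set; }
            public DbSet<SchoolGroup> SchoolGroups { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Self-referencing many-to-many with no payload: EF creates and
                // manages the join table itself.
                modelBuilder.Entity<SchoolGroup>()
                    .HasMany(g => g.Children)
                    .WithMany(g => g.Parents)
                    .Map(m =>
                    {
                        m.ToTable("SchoolGroupHierarchy"); // assumed name
                        m.MapLeftKey("ParentId");
                        m.MapRightKey("ChildId");
                    });

                base.OnModelCreating(modelBuilder);
            }
        }

    Seeding then just means adding SchoolGroup instances to each other's Children/Parents collections before calling SaveChanges.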

    Read the article

  • JMF microphone volume controller

    - by TacB0sS
    How do I obtain the microphone volume controller in JMF? This is what I have: I tried this implementation concept of yours, but I keep getting a null from the first volume processor when I try to get the stream. Here is how I do it:

        // the device is the media device, specifically audio
        Processor processorForVolume = Manager.createProcessor(device.getLocator());
        // wait until configured
        ProcessorStates newState = new ProcessorStateListener(Processor.Configured).waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);
        // setting the content descriptor to null - I read in another thread that this allows getting the gain control
        processorForVolume.setContentDescriptor(null);
        // set the track control format to one supported by the device and the track control.
        // I didn't match it to an RTP allowed format, but I don't think this has anything to do with it...
        TrackControl[] trackControls = processorForVolume.getTrackControls();
        if (trackControls.length == 0)
            throw new MC_Exception("No track controls where found for this device:", new Object[]{device});
        for (TrackControl control : trackControls)
            trackManipulator.manipulateTrackControls(control);
        // wait until the processor is realized
        newState = new ProcessorStateListener(Controller.Realized).waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);
        // receives the gain control
        micVolumeController = processorForVolume.getGainControl();
        // cannot get the output stream to process further... any suggestions?
        processor = Manager.createProcessor(processorForVolume.getDataOutput());
        new ProcessorStateListener(Processor.Configured).waitForProcessorState(processor);
        processor.setContentDescriptor(DeviceCapturingManager.RAW_RTP);
        new ProcessorStateListener(Controller.Realized).waitForProcessorState(processor);

    This is the output it generates:

        volumeProcessorState: Configured
        format set to track control - com.sun.media.ProcessEngine$ProcTControl@1627c16: LINEAR, 48000.0 Hz, 16-bit, Stereo, LittleEndian, Signed
        volumeProcessorState: Realized

    and the data output from the processor is null. I should make clear that when the content descriptor != null I do get an output stream but not the volume controller, and when it is null I get the controller but no stream. I am trying to connect to an audio microphone device. Adam

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery or repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0) as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that it waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root= ? Currently I boot from the pendrive once, check the debug messages to see what /dev/sdX device has been assigned to the USB drive by the kernel, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to higher priority than the internal hard drives. There are various initrd-generating scripts which include support for UUID in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others. The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage-1).
    2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.

    Read the article

  • WinUSB failing on non-development computers

    - by Giawa
    Good afternoon, WinUSB is working well on the development computer that I am using (Win XP SP3). I am able to download new firmware to the Cypress FX2, and then connect to the new USB device once it 'renumerates'. However, I've tried the same code with the WinUSB driver on two other computers (Win XP SP3, Win7 x64) and they both returned the error "A device attached to the system is not functioning." when trying to use CreateFile to get a handle to the USB device. The devicePath was found successfully, so I'm not sure why it cannot connect to the device. Furthermore, the device manager states that my device is working properly. I'm curious if I'm missing something when compiling the code? I would guess that my development computer has something installed on it that the other computers do not? Or perhaps it's a power setting and the device is going to sleep (although I've fooled around with the Power Options on each computer to no avail). Does anyone have any ideas? I've compiled under Visual Studio 2008, and have installed the Microsoft C++ 2008 Redistributable Package on the computers that I've tested on. Thanks, Giawa
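
    For comparison, a minimal sketch of how the device handle is typically opened before calling WinUsb_Initialize; the overlapped flag is required by WinUSB. "A device attached to the system is not functioning" is ERROR_GEN_FAILURE (31) and at this stage often points at the device, its descriptors, or the driver install on that particular machine rather than the managed code, so treat this only as a baseline to compare against.

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        static class WinUsbOpen
        {
            const uint GENERIC_READ = 0x80000000;
            const uint GENERIC_WRITE = 0x40000000;
            const uint FILE_SHARE_READ = 0x00000001;
            const uint FILE_SHARE_WRITE = 0x00000002;
            const uint OPEN_EXISTING = 3;
            const uint FILE_ATTRIBUTE_NORMAL = 0x00000080;
            const uint FILE_FLAG_OVERLAPPED = 0x40000000;

            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern SafeFileHandle CreateFile(string lpFileName, uint dwDesiredAccess,
                uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
                uint dwFlagsAndAttributes, IntPtr hTemplateFile);

            // Opens the interface's device path the way WinUsb_Initialize expects:
            // read/write, shared, and overlapped I/O.
            public static SafeFileHandle Open(string devicePath)
            {
                SafeFileHandle handle = CreateFile(devicePath,
                    GENERIC_READ | GENERIC_WRITE,
                    FILE_SHARE_READ | FILE_SHARE_WRITE,
                    IntPtr.Zero, OPEN_EXISTING,
                    FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED,
                    IntPtr.Zero);

                if (handle.IsInvalid)
                    throw new Win32Exception(Marshal.GetLastWin32Error()); // 31 == ERROR_GEN_FAILURE
                return handle;
            }
        }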

    Read the article

  • Simple Fluent NHibernate Mapping Problem

    - by user500038
    I have the following tables I need to map:

        +-------------------------+
        | Person                  |
        +-------------------------+
        | PersonId                |
        | FullName                |
        +-------------------------+

        +-------------------------+
        | PersonAddress           |
        +-------------------------+
        | PersonId                |
        | AddressId               |
        | IsDefault               |
        +-------------------------+

        +-------------------------+
        | Address                 |
        +-------------------------+
        | AddressId               |
        | State                   |
        +-------------------------+

    And the following classes:

        public class Person
        {
            public virtual int Id { get; set; }
            public virtual string FullName { get; set; }
        }

        public class PersonAddress
        {
            public virtual Person Person { get; set; }
            public virtual Address Address { get; set; }
            public virtual bool IsDefault { get; set; }
        }

        public class Address
        {
            public virtual int Id { get; set; }
            public virtual string State { get; set; }
        }

    And finally the mappings:

        public class PersonMap : ClassMap<Person>
        {
            public PersonMap()
            {
                Id(x => x.Id, "PersonId");
            }
        }

        public class PersonAddressMap : ClassMap<PersonAddress>
        {
            public PersonAddressMap()
            {
                CompositeId().KeyProperty(x => x.Person, "PersonID")
                             .KeyProperty(x => x.Address, "AddressID");
            }
        }

        public class AddressMap : ClassMap<Address>
        {
            public AddressMap()
            {
                Id(x => x.Id, "AddressId");
            }
        }

    Assume I cannot alter the tables. If I take the mapping class (PersonAddress) out of the equation, everything works fine. If I put it back in I get:

        Could not determine type for: Person, Person, Version=1.0, Culture=neutral, PublicKeyToken=null, for columns: NHibernate.Mapping.Column(PersonId)

    What am I missing here? I'm sure this must be simple.
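
    For what it's worth, a hedged sketch of the usual fix: KeyProperty is for simple-typed key columns, while entity-typed halves of a composite key are declared with KeyReference, roughly like this (column names as in the tables above).

        using FluentNHibernate.Mapping;

        public class PersonAddressMap : ClassMap<PersonAddress>
        {
            public PersonAddressMap()
            {
                Table("PersonAddress");

                // Entity-typed parts of the composite key are references, not properties.
                CompositeId()
                    .KeyReference(x => x.Person, "PersonId")
                    .KeyReference(x => x.Address, "AddressId");

                Map(x => x.IsDefault);
            }
        }

    NHibernate also expects a composite-id class to override Equals and GetHashCode, so PersonAddress will need those as well.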

    Read the article

  • variables in abstract classes C++

    - by wyatt
    I have an abstract class CommandPath, and a number of derived classes as below:

        class CommandPath {
        public:
            virtual CommandResponse handleCommand(std::string) = 0;
            virtual CommandResponse execute() = 0;
            virtual ~CommandPath() {}
        };

        class GetTimeCommandPath : public CommandPath {
            int stage;
        public:
            GetTimeCommandPath() : stage(0) {}
            CommandResponse handleCommand(std::string);
            CommandResponse execute();
        };

    All of the derived classes have the member variable 'stage'. I want to build a function into all of them which manipulates 'stage' in the same way, so rather than defining it many times I thought I'd build it into the parent class. I moved 'stage' from the private sections of all of the derived classes into the protected section of CommandPath, and added the function as follows:

        class CommandPath {
        protected:
            int stage;
        public:
            virtual CommandResponse handleCommand(std::string) = 0;
            virtual CommandResponse execute() = 0;
            std::string confirmCommand(std::string, int, int, std::string, std::string);
            virtual ~CommandPath() {}
        };

        class GetTimeCommandPath : public CommandPath {
        public:
            GetTimeCommandPath() : stage(0) {}
            CommandResponse handleCommand(std::string);
            CommandResponse execute();
        };

    Now my compiler tells me for the constructor lines that none of the derived classes have a member 'stage'. I was under the impression that protected members are visible to derived classes? The constructor is the same in all classes, so I suppose I could move it to the parent class, but I'm more concerned about finding out why the derived classes aren't able to access the variable. Also, since previously I've only used the parent class for pure virtual functions, I wanted to confirm that this is the way to go about adding a function to be inherited by all derived classes.

    Read the article

  • C# Hook Forms / Windows / Dialogs etc. (via HWND?) to Capture Video Buffer (D3D Device?)

    - by Drax
    I am looking to create a very simple C# application which runs full-screen in Direct3D and is able to grab the Desktop 'scene', mapping each Window from the Desktop to a textured polygon in my D3D scene... I'm hoping to create a simplistic "3D Desktop" type of application as an experiment, and I'm wondering if there is a specific method for doing something like the following:

    1. Get a list of all the Windows open on the Desktop (a list of HWNDs?).
    2. Grab the X,Y position of each Window, as well as the Width and Height.
    3. Grab the rendered image of each Window (magic happens here).
    4. Create a new Texture/Surface in D3D using the Width and Height of the Window(s), and apply the image we grabbed as a Texture.

    Is there an efficient 'best practice' for acquiring the actual image(s) being rendered to the Desktop? Is there also a 'best practice' for "extending the desktop" to a virtual second, third, etc. "desktop" and being able to swap between them, including creating a unique instance of the task-bar for each virtual desktop? Thanks a million for any suggestions!
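
    Here is a rough sketch, using plain User32/GDI interop, of steps 1-3: enumerate top-level windows, read their bounds, and ask each one to paint itself into a bitmap via PrintWindow. Copying the bitmaps into D3D textures (step 4) is left out, and PrintWindow will not capture everything (layered or hardware-accelerated windows can come back blank), so treat it as a starting point rather than the "best practice" asked about.

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Runtime.InteropServices;

        static class WindowGrabber
        {
            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")] static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);
            [DllImport("user32.dll")] static extern bool IsWindowVisible(IntPtr hWnd);
            [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);
            [DllImport("user32.dll")] static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, uint nFlags);

            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            // Step 1: enumerate visible top-level windows.
            public static List<IntPtr> GetTopLevelWindows()
            {
                List<IntPtr> handles = new List<IntPtr>();
                EnumWindows(delegate(IntPtr hWnd, IntPtr lParam)
                {
                    if (IsWindowVisible(hWnd)) handles.Add(hWnd);
                    return true; // keep enumerating
                }, IntPtr.Zero);
                return handles;
            }

            // Steps 2 and 3: window bounds plus a GDI+ bitmap of its current contents.
            public static Bitmap Capture(IntPtr hWnd, out Rectangle bounds)
            {
                RECT r;
                GetWindowRect(hWnd, out r);
                bounds = Rectangle.FromLTRB(r.Left, r.Top, r.Right, r.Bottom);

                Bitmap bmp = new Bitmap(Math.Max(1, bounds.Width), Math.Max(1, bounds.Height));
                using (Graphics g = Graphics.FromImage(bmp))
                {
                    IntPtr hdc = g.GetHdc();
                    PrintWindow(hWnd, hdc, 0); // ask the window to paint itself into our DC
                    g.ReleaseHdc(hdc);
                }
                return bmp; // step 4 would copy this into a D3D texture of the same size
            }
        }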

    Read the article

  • How do I access abstract private data from derived class without friend or 'getter' functions in C++?

    - by John
    So, I am caught up in a dilemma right now. How am I supposed to access a pure abstract base class's private member variable from a derived class? I have heard from a friend that it is possible to access it through the base constructor, but he didn't explain how. How is it possible? There are some classes inherited from the base class. Is there any way to gain access to the private variables?

        class Base_button {
        private:
            bool is_vis;
            Rect rButton;
        public:
            // Constructors
            Base_button();
            Base_button( const Point &corner, double height, double width );

            // Destructor
            virtual ~ Base_button();

            // Accessors
            virtual void draw() const = 0;
            bool clicked( const Point &click ) const;
            bool is_visible() const;

            // Mutators
            virtual void show();
            virtual void hide();
            void move( const Point &loc );
        };

        class Button : public Base_button {
        private:
            Message mButton;
        public:
            // Constructors
            Button();
            Button( const Point &corner, const string &label );

            // Accessors
            virtual void draw() const;

            // Mutators
            virtual void show();
            virtual void hide();
        };

    I want to be able to access the Rect and bool in the base class from the subclass.

    Read the article

  • Using abstract base to implement private parts of a template class?

    - by StackedCrooked
    When using templates to implement mix-ins (as an alternative to multiple inheritance) there is the problem that all code must be in the header file. I'm thinking of using an abstract base class to get around that problem. Here's a code sample:

        class Widget {
        public:
            virtual ~Widget() {}
        };

        // Abstract base class allows to put code in .cpp file.
        class AbstractDrawable {
        public:
            virtual ~AbstractDrawable() = 0;
            virtual void draw();
            virtual int getMinimumSize() const;
        };

        // Drawable mix-in
        template<class T>
        class Drawable : public T, public AbstractDrawable {
        public:
            virtual ~Drawable() {}

            virtual void draw() {
                AbstractDrawable::draw();
            }

            virtual int getMinimumSize() const {
                return AbstractDrawable::getMinimumSize();
            }
        };

        class Image : public Drawable< Widget > {
        };

        int main() {
            Image i;
            i.draw();
            return 0;
        }

    Has anyone walked that road before? Are there any pitfalls that I should be aware of?

    Read the article

  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
Can anyone at least point me in the right direction? I'm really tired of working from 1am til 4am (our maintenance hours).

    Read the article

  • How to stop registration attempts on Asterisk

    - by Travesty3
    The main question: My Asterisk logs are littered with messages like these: [2012-05-29 15:53:49] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:50] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53 [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53 [2012-05-29 15:54:02] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:54:03] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"55435217"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"1731687005"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=dEBcOzUysX [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=9zUari4Mve [2012-05-30 01:19:00] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=sOYgI1ItQn [2012-05-30 01:19:02] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=2EGLTzZSEi [2012-05-30 01:19:04] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=j0JfZoPcur [2012-05-30 01:19:06] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=Ra0DFDKggt [2012-05-30 01:19:08] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=rR7q7aTHEz [2012-05-30 01:19:10] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=VHUMtOpIvU [2012-05-30 01:19:12] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=JxZUzBnPMW I use Asterisk for an automated phone system. The only thing it does is receives incoming calls and executes a Perl script. No outgoing calls, no incoming calls to an actual phone, no phones registered with Asterisk. It seems like there should be an easy way to block all unauthorized registration attempts, but I have struggled with this for a long time. It seems like there should be a more effective way to prevent these attempts from even getting far enough to reach my Asterisk logs. Some setting I could turn on/off that doesn't allow registration attempts at all or something. Is there any way to do this? Also, am I correct in assuming that the "Registration from ..." messages are likely people attempting to get access to my Asterisk server (probably to make calls on my account)? 
And what's the difference between those messages and the "Sending fake auth rejection ..." messages? Further detail: I know that the "Registration from ..." lines are intruders attempting to get access to my Asterisk server. With Fail2Ban set up, these IPs are banned after 5 attempts (for some reason, one got 6 attempts, but w/e). But I have no idea what the "Sending fake auth rejection ..." messages mean or how to stop these potential intrusion attempts. As far as I can tell, they have never been successful (haven't seen any weird charges on my bills or anything). Here's what I have done: Set up hardware firewall rules as shown below. Here, xx.xx.xx.xx is the IP address of the server, yy.yy.yy.yy is the IP address of our facility, and aa.aa.aa.aa, bb.bb.bb.bb, and cc.cc.cc.cc are the IP addresses that our VoIP provider uses. Theoretically, ports 10000-20000 should only be accessible by those three IPs.+-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ | Order | Source Ip | Protocol | Direction | Action | Destination Ip | Destination Port | +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ | 1 | cc.cc.cc.cc/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 2 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 80 | | 3 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2749 | | 4 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 443 | | 5 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 53 | | 6 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1981 | | 7 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1991 | | 8 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2001 | | 9 | yy.yy.yy.yy/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 137-138 | | 10 | yy.yy.yy.yy/255.255.255.255 | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 139 | | 11 | yy.yy.yy.yy/255.255.255.255 | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 445 | | 14 | aa.aa.aa.aa/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 17 | bb.bb.bb.bb/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 18 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1971 | | 19 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2739 | | 20 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1023-1050 | | 21 | any | all | inbound | deny | any on server | 1-65535 | +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ Set up Fail2Ban. This is sort of working, but it's reactive instead of proactive, and doesn't seem to be blocking everything (like the "Sending fake auth rejection ..." messages). Set up rules in sip.conf to deny all except for my VoIP provider. Here is my sip.conf with almost all commented lines removed (to save space). 
Notice at the bottom is my attempt to deny all except for my VoIP provider:[general] context=default allowguest=no allowoverlap=no bindport=5060 bindaddr=0.0.0.0 srvlookup=yes disallow=all allow=g726 allow=ulaw allow=alaw allow=g726aal2 allow=adpcm allow=slin allow=lpc10 allow=speex allow=g726 insecure=invite alwaysauthreject=yes ;registertimeout=20 registerattempts=0 register = user:pass:[email protected]:5060/700 [mysipprovider] type=peer username=user fromuser=user secret=pass host=sip.mysipprovider.com fromdomain=sip.mysipprovider.com nat=no ;canreinvite=yes qualify=yes context=inbound-mysipprovider disallow=all allow=ulaw allow=alaw allow=gsm insecure=port,invite deny=0.0.0.0/0.0.0.0 permit=aa.aa.aa.aa/255.255.255.255 permit=bb.bb.bb.bb/255.255.255.255 permit=cc.cc.cc.cc/255.255.255.255

    Read the article

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVidia graphics cards: Quadro NVS 295 (PCI Express, dual DisplayPort outputs) GeForce FX 5200 (PCI, DVI and VGA outputs) I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf: Section "ServerLayout" Identifier "Default Layout" Screen "PCI-Express Screen" 0 0 # adding this makes X fail to start: Screen "PCI Screen" 0 Inputdevice "Generic Keyboard" Inputdevice "Configured Mouse" EndSection Section "Module" Load "glx" # not sure why/if this is needed EndSection Section "Monitor" Identifier "DELL 2408WFP" Option "DPMS" EndSection Section "Device" Identifier "NVIDIA Quadro NVS 295" Driver "nvidia" Option "RenderAccel" "true" Screen 0 BusID "PCI:2:0:0" EndSection Section "Device" Identifier "NVIDIA GeForce FX 5200" Driver "nvidia" Option "RenderAccel" "true" Screen 1 BusID "PCI:6:4:0" EndSection Section "Screen" Identifier "PCI-Express Screen" Device "NVIDIA Quadro NVS 295" Monitor "DELL 2408WFP" Defaultdepth 24 Option "TwinView" "True" Option "UseEdidFreqs" "True" Option "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0" EndSection Section "Screen" Identifier "PCI Screen" Device "NVIDIA GeForce FX 5200" Monitor "DELL 2408WFP" Defaultdepth 24 Option "TwinView" "True" Option "UseEdidFreqs" "True" Option "MetaModes" "1920x1200 +0+0" EndSection I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows: 02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd 06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200] Which is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed...and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU so I can then configure its monitor?

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".

    Read the article

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: Is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (dev/sdd1), filesystem and data? Will the partition be mountable and the data still there? Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared: sudo mdadm --zero-superblock /dev/sdd The disks were repartitioned with fdisk and filesystems created with mfks.ext4. All disks where mounted and everything worked fine. Today, a couple of weeks later, one of the disks is failing to be recognized when trying to mount it, or rather the single partition on it. sudo mount /dev/sdd1 /mnt/tmp mount: special device /dev/sdd1 does not exist fdisk claims there to be a partition on it: sudo fdisk -l /dev/sdd Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb06f6341 Device Boot Start End Blocks Id System /dev/sdd1 1 243201 1953512001 83 Linux Of course mount is right, the device /dev/sdd1 is not there, I'm guessing udev did not create it because of the mdadm data still on it: sudo mdadm --examine /dev/sdd /dev/sdd: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b164e513:c0584be1:3cc53326:48691084 Name : pringle:0 (local to host pringle) Creation Time : Sat Jun 16 21:37:14 2012 Raid Level : raid6 Raid Devices : 6 Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB) Array Size : 15628107776 (7452.06 GiB 8001.59 GB) Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2 Update Time : Sun Aug 12 22:20:39 2012 Checksum : 4c329db0 - correct Events : 1238535 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : AA.AAA ('A' == active, '.' == missing) My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestion on what do to? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.

    Read the article

  • How can I set my bootloader to load my primary (C:) partition?

    - by acidzombie24
    I created 4 partitions and want to use them to have seperate Windows XP, Windows 7, (possibly) Windows Vista installations, and "WinDummy" (to test applications in Vista, XP or another OS). I used Norton Ghost to install an OS to the drive in about 3 minutes. My problem is that I installed the spare first on the 4th partition, then Windows 7 on the second. I tried to set the bootloader (with easybcd) to use the first partition - but it doesn't want to. Heres my debug screen on easybcd As you can see, the device is set to H and i cant figure out how to change it. I can make my bootloader use Windows 7 first, but I can't make it use my C: install of XP instead of my spare H:. How would I fix this? Windows Boot Manager -------------------- identifier {9dea862c-5cdd-4e70-acc1-f32b344d4795} device partition=H: description Windows Boot Manager locale en-US inherit {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e} default {bc2d8409-8640-11de-aa7e-a477d86453c4} resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} displayorder {bc2d8409-8640-11de-aa7e-a477d86453c4} {bc2d8406-8640-11de-aa7e-a477d86453c4} {bc2d8404-8640-11de-aa7e-a477d86453c4} {466f5a88-0af2-4f76-9038-095b170dc21c} toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d} timeout 3 Real-mode Boot Sector --------------------- identifier {bc2d8409-8640-11de-aa7e-a477d86453c4} device partition=C: path \NTLDR description Windows XP Windows Boot Loader ------------------- identifier {bc2d8406-8640-11de-aa7e-a477d86453c4} device partition=D: path \Windows\system32\winload.exe description Windows 7 locale en-US inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {bc2d8407-8640-11de-aa7e-a477d86453c4} recoveryenabled Yes osdevice partition=D: systemroot \Windows resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} nx OptIn Windows Boot Loader ------------------- identifier {bc2d8404-8640-11de-aa7e-a477d86453c4} device partition=E: path \Windows\system32\winload.exe description Blank osdevice partition=E: systemroot \Windows Windows Legacy OS Loader ------------------------ identifier {466f5a88-0af2-4f76-9038-095b170dc21c} device partition=H: path \ntldr description Windows XP Spare

    Read the article

  • How do you monitor SSD wear in Windows when the drives are presented as 'generic' devices?

    - by MikeyB
    Under Linux, we can monitor SSD wear fairly easily with smartmontools whether the drive is presented as a normal block device or a generic device (which happens when the drive has been hardware RAIDed by certain controllers such as the one on the IBM HS22). How can we do the equivalent under Windows? Does anyone actually use smartmontools? Or are there other packages out there? The problem is that SCSI Generic devices just don't show up in Windows. If the drives aren't RAIDed we can see them fine. How I'd do it in Linux:

        sles11-live:~ # lsscsi -g
        [1:0:0:0]    disk    SMART     USB-IBM           8989  /dev/sda   /dev/sg0
        [2:0:0:0]    disk    ATA       MTFDDAK256MAR-1K  MA44  -          /dev/sg1
        [2:0:1:0]    disk    ATA       MTFDDAK256MAR-1K  MA44  -          /dev/sg2
        [2:1:8:0]    disk    LSILOGIC  Logical Volume    3000  /dev/sdb   /dev/sg3

        sles11-live:~ # smartctl -l ssd /dev/sg1
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size Value  Description
          7  =====  =  =  ==  Solid State Device Statistics (rev 1) ==
          7  0x008  1  26~  Percentage Used Endurance Indicator
                            |_ ~ normalized value

        sles11-live:~ # smartctl -l ssd /dev/sg2
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size Value  Description
          7  =====  =  =  ==  Solid State Device Statistics (rev 1) ==
          7  0x008  1  3~   Percentage Used Endurance Indicator
                            |_ ~ normalized value

    Read the article

  • Second ip address on same interface CentOS 6.3

    - by user16081
    I tried to add a second LAN addresses in CentOS 6.3 on a brand new install and it's not working. I installed a new copy of CentOS 5.7 and tried the same and it worked right away. Now I'm just trying to setup the alias on the same subnet and it's not working. what am i doing wrong, is this not possible on CentOS 6.3? second ip address on the same interface but on a different subnet CentOS 5.7 it works: DEVICE=eth0 BOOTPROTO=static BROADCAST=192.168.0.255 HWADDR=00:0C:29:01:6F:89 IPADDR=192.168.0.167 NETMASK=255.255.255.0 NETWORK=192.168.0.0 ONBOOT=yes DEVICE=eth0:0 BOOTPROTO=static BROADCAST=192.168.0.255 HWADDR=00:0C:29:01:6F:89 IPADDR=192.168.0.166 NETMASK=255.255.255.0 NETWORK=192.168.0.0 ONBOOT=yes On CentOS 6.3: does not work DEVICE=eth0 BOOTPROTO=static BROADCAST=192.168.0.255 HWADDR=00:0C:29:1E:DE:86 IPADDR=192.168.0.242 NETMASK=255.255.255.0 NETWORK=192.168.0.0 GATEWAY=192.168.0.1 ONBOOT=yes DNS1=205.134.232.138 DNS2=4.4.4.4 DEVICE=eth0:0 BOOTPROTO=static BROADCAST=192.168.0.255 HWADDR=00:0C:29:1E:DE:86 IPADDR=192.168.0.240 NETMASK=255.255.255.0 NETWORK=192.168.0.0 ONBOOT=yes # /etc/init.d/network restart Shutting down interface eth0: Device state: 3 (disconnected) [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK Bringing up interface eth0: Active connection state: activated Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3 [ OK ] # ping 192.168.0.240 PING 192.168.0.240 (192.168.0.240) 56(84) bytes of data. From 192.168.0.242 icmp_seq=2 Destination Host Unreachable Appreciate any advice, thanks Update: Perhaps this is relevant? On CentOS 5.7: # dmesg |grep eth eth0: registered as PCnet/PCI II 79C970A eth0: link up eth0: link up On 6.3: # dmesg | grep eth e1000 0000:02:00.0: eth0: (PCI:66MHz:32-bit) 00:0c:29:1e:de:86 e1000 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None 8021q: adding VLAN 0 to HW filter on device eth0 eth0: no IPv6 routers present

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in:

        Mirrored boot partition
        Mirrored Operating system
        Mirrored Virtual machine O/S disks
        Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   Virtual machine Guest VMDK's (RAID-1)
        VG1     900Gb  Volume group consisting of a RAID-5 and two RAID-1
        VG2     300Gb  Volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical, and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    So I have an odd issue that I can't quite figure out... I am running Windows 8 Enterprise on a Dell 6420 laptop. I have a Broadcom 802.11n wireless adapter. I am connected to a home router (Netgear WNDR3700) that is connected to the internet. It is a very simple home network setup. I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up using both External and Internal Virtual Switches, but have yet to be able to get it to work on my machine. I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge my wifi connection (both when it is bridged automatically by Windows when I set up an external virtual switch bound to the wifi adapter, and when I do it manually by creating an internal virtual switch, right-clicking it and my wifi network, and selecting "Bridge Connections"). In both cases, after the bridge is established, my host machine can no longer connect to the internet. I am not sure where to start with troubleshooting this problem. After the bridge is set up, an ipconfig shows all network devices on the machine as "Media Disconnected". I do know that the wireless adapter is connected to the router b/c it shows the connection as active and full-strength. The only thing I can possibly think of is that this machine also has the Cisco VPN client installed on it, which installs a Cisco Virtual Network Adapter. Is it possible that this Cisco Virtual Adapter is causing me issues when I try to bridge? I saw some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?

    Read the article
