Search Results

Search found 20993 results on 840 pages for 'automatic storage management asm diskgroup redundancy metadata'.


  • What is the role of traditional issue tracker when Scrum / Kanban board is used?

    - by Borek
    From a very high-level view, it seems to me there are generally two types of project management tools:

    1. Traditional issue trackers like FogBugz, JIRA, Bugzilla, Trac, Redmine, etc.
    2. Virtual card boards / agile project management tools like Pivotal Tracker, GreenHopper, AgileZen, Trello, etc.

    Sure, they overlap in one way or another (e.g. Pivotal Tracker tasks can be imported into JIRA, and GreenHopper itself is implemented on top of JIRA's issue base), but I think one can still see the difference in orientation between the two types of tools. Traditional issue trackers seem to be used even in companies that otherwise do agile project management. My question is: why do they do that? I also feel that we should use an issue tracker in my company, but when I think about it, I'm not actually sure why we would need one. For example, Trello development seems to be managed using Trello itself (see this virtual wall) even though the team has access to FogBugz, one of the best issue trackers around. So maybe we don't need a traditional issue tracker if we do 100% of our work in an agile manner using one of the agile PM tools?

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:

    - user accounts
    - groups
    - SSL certificates
    - access rights
    - databases
    - software license provisionings
    - storage
    - list-serve accounts

    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us with administration chores such as:

    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, make sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P, reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).

    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:

    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object with a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in

    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
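    For a sense of scale, here is a minimal sketch (in Python, with purely illustrative names, not from any existing product) of the kind of core record such an OMS would track; every chore in the list above reduces to a query or bulk update over something like this:

        from dataclasses import dataclass
        from datetime import date
        from typing import Optional

        @dataclass
        class ManagedObject:
            object_id: str           # provider-specific id (a DB name, a cert serial, ...)
            object_type: str         # "ssl_cert", "db_account", "license", ...
            owner: str               # user or group from the identity provider (LDAP/AD)
            provider: str            # plug-in that knows how to disable/delete the object
            expires: Optional[date]  # None if the object never expires

        def expired(objects: list[ManagedObject], today: date) -> list[ManagedObject]:
            """Objects whose expiration date has passed and should be disabled."""
            return [o for o in objects if o.expires is not None and o.expires < today]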

    Read the article

  • Webcast: DB Enterprise User Security Integration with Oracle Directory Services

    - by B Shashikumar
    The typical enterprise has a large number of DBA (database administrator) accounts that are locally managed, which is often very costly, problematic, and error-prone. Databases are a crucial component of your enterprise IT infrastructure, housing sensitive corporate data as well as database user accounts and privileges. To ensure the integrity of your enterprise's data, it's imperative to have a well-managed identity management system. This begins with centralized management of user accounts and access rights. Enterprise User Security (EUS), an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, gives you the ability to manage database users and their authorizations in one central place. The cost of user provisioning and password resets is dramatically reduced. This technology is a must for new application development and should be considered for existing applications as well. Join Oracle Advisors for a live webcast on Jul 11 at 8am Pacific Time, where Oracle experts will briefly introduce EUS, followed by a detailed discussion of the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. To register for this event, click here.

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions that have been tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to your industry on a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft CRM is a world-class solution for organizations of every size, and Oracle's planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization's constituents along three design principles (Simplicity, Productivity, and Lowered Total Cost of Ownership) as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft CRM focuses on these themes while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides overviews of the new features and enhancements planned for Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release, while also helping them plan their IT projects and investments. (The link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance our customers' ownership experience and help them run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by this firm commitment we've made to our customers.

    Read the article

  • How to reclaim storage for deleted LOBs

    - by Jim Hudson
    I have a LOB tablespace, currently holding 9GB out of 12GB available, and as far as I can tell, deleting records doesn't reclaim any storage in the tablespace. I'm getting worried about handling further processing. This is Oracle 11.1, and the data are in a CLOB and a BLOB column in the same table. The LOB index segments (SYS_IL...) are small; all the storage is in the data segments (SYS_LOB...). We've tried purge and coalesce and didn't get anywhere: same number of bytes in user_extents. "ALTER TABLE xxx MOVE" will work, but we'd need someplace to move it to that has enough space for the revised data. We'd also need to do that off hours and rebuild the indexes, of course, but that's easy enough. Copying out the good data, doing a truncate, then copying it back will also work, but that's pretty much just what the ALTER TABLE command does. Am I missing some easy way to shrink things down and get the storage back? Or is "ALTER TABLE xxx MOVE" the best approach? Or is this a non-issue, and Oracle will grab back the space from the deleted LOB rows when it needs it?
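    One option worth testing before a full move, sketched below with placeholder table and column names, is Oracle's in-place LOB shrink. It applies to basicfile LOBs in an ASSM tablespace, and behavior has varied by release (11.1 included), so try it on a copy first:

        -- Placeholder names; shrink each LOB segment in place:
        ALTER TABLE xxx MODIFY LOB (clob_column) (SHRINK SPACE);
        ALTER TABLE xxx MODIFY LOB (blob_column) (SHRINK SPACE);

        -- Then re-check the segment sizes:
        SELECT segment_name, SUM(bytes)/1024/1024 AS mb
          FROM user_extents
         GROUP BY segment_name;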

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical file via code, which worked great and sped up the page load dramatically. This was of course before I realised that the files were only available for the duration of the role. Next I looked at a startup task to create the files when the role was started; however, I then realised that separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a startup task to a function in the Run method of the role that checks the database every 10 minutes to see if changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve this? Just to clarify what I'm trying to achieve:

    - A view is created in the user interface running on a web role and stored in the database.
    - A separate web role (front end) has a client-side application with a virtual path provider pointing view requests to local storage (a LocalResource).
    - A separate worker role creates the view structure and loads it into the front-end web role's local storage.
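    For reference, this minimal sketch shows how a role instance resolves its own local storage path with the ServiceRuntime API ("ViewCache" is a hypothetical LocalResource name declared in the service definition). Note that a LocalResource is per-instance disk, which is exactly why another role cannot reach it directly and why pushing files through blob storage is the usual pattern instead:

        using System.IO;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public static class ViewStore
        {
            // Resolve a file inside this instance's "ViewCache" local resource
            // (the resource name is illustrative).
            public static string PathFor(string viewName)
            {
                LocalResource cache = RoleEnvironment.GetLocalResource("ViewCache");
                return Path.Combine(cache.RootPath, viewName);
            }
        }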

    Read the article

  • MySQL " identify storage engine statement"

    - by sammysmall
    This IS NOT a homework question! While building my current student database project, I realized that I may want to identify comprehensive information about a database design in the future. More so if I am fortunate enough to get a job in this field and were handed a database project: how could I break down certain elements for identification? In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. I thought (duh) that I was using MyISAM, based on most of my textbook instruction and "MySQL 5.0 Reference Manual :: 13 Storage Engines :: 13.1 The MyISAM Storage Engine". Using SHOW ENGINES at the console, I determined that this was in fact incorrect for this version. No problem; I figured out why they have "versions", the need to pay attention to which version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail.

    Q1. Specifically, what statement will identify the version used by someone else's initial database creation? (Since I created my own databases, I know what version I used.)

    Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? (I specified a particular database in my collection, then tried SHOW ENGINE, which did not work; then I tried to get the metadata for one table in that database:

        mysql> SELECT duck_cust, table_type, engine
            -> FROM INFORMATION_SCHEMA.tables
            -> WHERE table_schema = 'tp'
            -> ORDER BY table_type ASC, table_name DESC;

    As this was not really what I wanted (and did not work), I am looking for some direction from the pros.)

    Q3. (If you really have the inclination to continue helping:) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions?

    Please and thank you in advance for your time and efforts! sammysmall
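    To the extent Q1 and Q2 ask for concrete statements, these standard MySQL queries cover them (as far as I know, MySQL keeps no per-database record of which version created it, so VERSION() can only report on the running server):

        -- Q1: version of the server you are connected to right now
        SELECT VERSION();

        -- Q2: storage engine of every table in a given schema
        SELECT TABLE_NAME, ENGINE
          FROM INFORMATION_SCHEMA.TABLES
         WHERE TABLE_SCHEMA = 'tp';

        -- or, equivalently, per database:
        SHOW TABLE STATUS FROM tp;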

    Read the article

  • Do all the HTML5 storage systems work together ?

    - by azera
    While there is a lot of good stuff in HTML5, one thing I don't get is the redundant storage mechanisms. First there are localStorage and sessionStorage, which are key-value stores; one is for one instance of the app ("one tab"), and the other works across all the instances of that application so they can share data. localStorage survives closing your browser, and both have a limited size (usually 5MB). That's great, and everything would be nice if we stopped there. But then there is the Web SQL Database, which has the same security model as localStorage, the same size limit, the same everything, except it works like (is) SQLite, with tables and SQL syntax and all of that. And the bummer is, they don't work on the same data at all! This is not two ways to access your data; this is really two separate stores for every HTML5 app out there (not created by default, yes, but still, you see my point). What I would like to know is: is there a reason for both of these mechanisms to exist at the same time? Or did they just look at the SQL and NoSQL movements and go "screw it, let's add both!"? Why not implement local/session storage as a table inside the Web SQL DB?
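    For concreteness, a short sketch of the two APIs being contrasted (Web SQL as it existed in WebKit; the W3C has since abandoned the spec, so treat openDatabase as historical):

        // Key-value storage, shared by every tab on the same origin,
        // surviving a browser restart:
        localStorage.setItem("theme", "dark");

        // Per-tab key-value storage, gone when the session ends:
        sessionStorage.setItem("draft", "...");

        // Web SQL: same origin rules and a similar size limit, but a
        // completely separate SQLite-backed store:
        var db = openDatabase("app", "1.0", "app db", 5 * 1024 * 1024);
        db.transaction(function (tx) {
            tx.executeSql("CREATE TABLE IF NOT EXISTS kv (k TEXT, v TEXT)");
        });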

    Read the article

  • Convert a binary tree to linked list, breadth first, constant storage/destructive

    - by Merlyn Morgan-Graham
    This is not homework, and I don't need to answer it, but now I have become obsessed :) The problem is: design an algorithm to destructively flatten a binary tree to a linked list, breadth-first. Okay, easy enough: just build a queue and do what you have to. That was the warm-up. Now, implement it with constant storage (recursion, if you can figure out an answer using it, is logarithmic storage, not constant). I found a solution to this problem on the Internet about a year back, but now I've forgotten it, and I want to know :) The trick, as far as I remember, involved using the tree to implement the queue, taking advantage of the destructive nature of the algorithm: when you are linking the list, you are also pushing an item into the queue. Each time I try to solve this, I lose nodes (such as each time I link the next node/add to the queue), I require extra storage, or I can't figure out the convoluted method I need to get back to a node that has the pointer I need. Even the link to that original article/post would be useful to me :) Google is giving me no joy. Edit: Jérémie pointed out that there is a fairly simple (and well-known) answer if you have a parent pointer. While I now think he is correct about the original solution containing a parent pointer, I really wanted to solve the problem without it :) The refined requirements use this definition for the node:

        struct tree_node
        {
            int value;
            tree_node* left;
            tree_node* right;
        };
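    For reference, here is the "warm-up" version as a baseline: a sketch using an explicit queue (O(n) extra storage), relinking nodes through their right pointers. The constant-storage variant remains the open part of the question.

        #include <queue>

        // Flatten breadth-first into a list linked through 'right'.
        // Returns the head (the old root).
        tree_node* flatten_bfs(tree_node* root)
        {
            std::queue<tree_node*> q;
            if (root) q.push(root);
            tree_node* tail = nullptr;
            while (!q.empty())
            {
                tree_node* n = q.front(); q.pop();
                if (n->left)  q.push(n->left);   // enqueue children first, before
                if (n->right) q.push(n->right);  // n->right can be overwritten
                if (tail) tail->right = n;       // append n to the list
                tail = n;
                n->left = nullptr;
            }
            if (tail) tail->right = nullptr;
            return root;
        }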

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for the users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals?

    - 2 websites (1 forum, 1 ecommerce)
    - 1 LB
    - 1 app server (to scale out to as many as needed)
    - 1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances load up, I need to run a script to set the basics up on that server (git, PHP mods, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
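    As one illustration of the shared-uploads approach, here is a hedged sketch of mounting an S3 bucket with s3fs so every autoscaled instance sees the same files (bucket name and paths are hypothetical; EBS, by contrast, attaches to a single instance at a time):

        # After installing s3fs, mount the bucket over the uploads directory:
        s3fs my-uploads-bucket /var/www/websitefolder/images/avatars \
            -o allow_other -o use_cache=/tmp/s3fs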

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 xenpv, with cPanel WHM, hosted at VPS.net. Sometimes we will receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but currently we have a nuke_limit of 500, and when running "varnishstat -1 -f n_lru_nuked" we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows it using 763m, although we've set it to be allowed to use 1200m. Any ideas of what the problem can be?
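    For reference, the knobs the post is already circling (storage size and nuke_limit) are set like this; a hedged sketch, flags per varnishd(1), with illustrative values (the exact varnishadm invocation depends on your -T/-S setup):

        # Cache storage size, set at startup:
        varnishd -a :80 -b localhost:8080 -s malloc,2G

        # nuke_limit can be raised at runtime through the CLI:
        varnishadm param.set nuke_limit 2000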

    Read the article

  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this:

        usb 1-4.3: new full speed USB device using ohci_hcd and address 5

    so I know it's getting seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub:

        T:  Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4
        D:  Ver= 2.00 Cls=09(hub  ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P:  Vendor=05e3 ProdID=0608 Rev=77.61
        S:  Product=USB2.0 Hub
        C:  #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA
        I:  If#= 0 Alt= 0 #EPs= 1 Cls=09(hub  ) Sub=00 Prot=00 Driver=hub

        T:  Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0
        D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P:  Vendor=13fd ProdID=1340 Rev=02.10
        S:  Manufacturer=Generic
        S:  Product=External
        C:  #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA
        I:  If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

    It seems the problem is that the root hub is:

        T:  Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
        D:  Ver= 1.10 Cls=09(hub  ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P:  Vendor=1d6b ProdID=0001 Rev=02.06
        S:  Manufacturer=Linux 2.6.32-25-server ohci_hcd
        S:  Product=OHCI Host Controller
        S:  SerialNumber=0000:00:02.0
        C:  #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I:  If#= 0 Alt= 0 #EPs= 1 Cls=09(hub  ) Sub=00 Prot=00 Driver=hub

    and there's no mention of ehci_hcd. lsusb -t gives me:

        /:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M
            |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M
                |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M
                |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M
            |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M

    It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction?

    EDIT: Full lsusb -v output:

        Bus 001 Device 005: ID 13fd:1340 Initio Corporation
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x13fd Initio Corporation
          idProduct          0x1340
          bcdDevice            2.10
          iManufacturer           1 Generic
          iProduct                2 External
          iSerial                 3 57442D574341595930323337
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0xc0
              Self Powered
            MaxPower                2mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         8 Mass Storage
              bInterfaceSubClass      6 SCSI
              bInterfaceProtocol     80 Bulk (Zip)
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0040  1x 64 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02  EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0040  1x 64 bytes
                bInterval               0
        Device Qualifier (for other device speed):
          bLength                10
          bDescriptorType         6
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          bNumConfigurations      1
        Device Status:     0x0001
          Self Powered

        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            9 Hub
          bDeviceSubClass         0 Unused
          bDeviceProtocol         0 Full speed (or root) hub
          bMaxPacketSize0        64
          idVendor           0x05e3 Genesys Logic, Inc.
          idProduct          0x0608 USB-2.0 4-Port HUB
          bcdDevice           77.61
          iManufacturer           0
          iProduct                1 USB2.0 Hub
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           25
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0xe0
              Self Powered
              Remote Wakeup
            MaxPower              100mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           1
              bInterfaceClass         9 Hub
              bInterfaceSubClass      0 Unused
              bInterfaceProtocol      0 Full speed (or root) hub
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0001  1x 1 bytes
                bInterval             255
        Hub Descriptor:
          bLength                 9
          bDescriptorType        41
          nNbrPorts               4
          wHubCharacteristic 0x00e0
            Ganged power switching
            Ganged overcurrent protection
            Port indicators
          bPwrOn2PwrGood         50 * 2 milli seconds
          bHubContrCurrent      100 milli Ampere
          DeviceRemovable      0x00
          PortPwrCtrlMask      0xff
        Hub Port Status:
          Port 1: 0000.0100 power
          Port 2: 0000.0103 power enable connect
          Port 3: 0000.0103 power enable connect
          Port 4: 0000.0100 power
        Device Qualifier (for other device speed):
          bLength                10
          bDescriptorType         6
          bcdUSB               2.00
          bDeviceClass            9 Hub
          bDeviceSubClass         0 Unused
          bDeviceProtocol         1 Single TT
          bMaxPacketSize0        64
          bNumConfigurations      1
        Device Status:     0x0001
          Self Powered

        Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            9 Hub
          bDeviceSubClass         0 Unused
          bDeviceProtocol         0 Full speed (or root) hub
          bMaxPacketSize0        64
          idVendor           0x1d6b Linux Foundation
          idProduct          0x0001 1.1 root hub
          bcdDevice            2.06
          iManufacturer           3 Linux 2.6.32-25-server ohci_hcd
          iProduct                2 OHCI Host Controller
          iSerial                 1 0000:00:02.0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           25
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0xe0
              Self Powered
              Remote Wakeup
            MaxPower                0mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           1
              bInterfaceClass         9 Hub
              bInterfaceSubClass      0 Unused
              bInterfaceProtocol      0 Full speed (or root) hub
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81  EP 1 IN
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0002  1x 2 bytes
                bInterval             255
        Hub Descriptor:
          bLength                11
          bDescriptorType        41
          nNbrPorts              10
          wHubCharacteristic 0x0002
            No power switching (usb 1.0)
            Ganged overcurrent protection
          bPwrOn2PwrGood          1 * 2 milli seconds
          bHubContrCurrent        0 milli Ampere
          DeviceRemovable      0x00 0x00
          PortPwrCtrlMask      0xff 0xff
        Hub Port Status:
          Port  1: 0000.0100 power
          Port  2: 0000.0100 power
          Port  3: 0000.0100 power
          Port  4: 0000.0103 power enable connect
          Port  5: 0000.0100 power
          Port  6: 0000.0103 power enable connect
          Port  7: 0000.0100 power
          Port  8: 0000.0100 power
          Port  9: 0000.0100 power
          Port 10: 0000.0100 power
        Device Status:     0x0003
          Self Powered
          Remote Wakeup Enabled
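    Since the root hub in that output is OHCI-only, the first things worth checking are whether the machine exposes an EHCI (USB 2.0) controller at all and whether its driver is loaded; a hedged sketch with standard Linux tools:

        lspci | grep -i usb      # is an EHCI controller present in hardware?
        lsmod | grep ehci        # is ehci_hcd loaded?
        sudo modprobe ehci_hcd   # if present but unloaded, try loading it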

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in replicated mode, but with the ability to dynamically scale the volume up and down? Say you have three servers all running glusterd; your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 \
            192.168.0.150:/test-volume \
            192.168.0.151:/test-volume \
            192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add two more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances, say three Apache front ends, with the webroot set up on the Gluster volume mount. My concern is that if I ever need to spin up a server, I would want the server to not only be an additional Apache front end, but also another server in the Gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
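    On recent GlusterFS releases (3.3 and later, if memory serves; check your version's docs), add-brick accepts a new replica count, which is the closest thing to what's described. A hedged sketch with the two new peers:

        gluster peer probe 192.168.0.153
        gluster peer probe 192.168.0.154
        gluster volume add-brick test-volume replica 5 \
            192.168.0.153:/test-volume \
            192.168.0.154:/test-volume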

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project. I need a place to store some filesystem-independent meta information on a storage device. The device has an MSDOS partition table. The device also may have unallocated space (depending on its size), but it will be TRIMmed (and may also be blown away by new partitions in the future). I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the metadata first needs to be written, meaning there are no EBRs/VBRs present that I could use. There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store data at the end of this block; however, the device is bootable and I have no way of knowing if I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector. I'm pretty sure (correct me if I'm wrong) that means the second sector is available for use by partition data, so I can't use that either. Does anybody have any ideas? I need 4 bytes of space.
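    For what it's worth, here is a hedged C sketch of the end-of-bootstrap idea. One caveat: many modern tools treat bytes 440-443 of the MBR as a disk signature, so the sketch writes the 4 bytes just before that, at offsets 436-439, and it carries exactly the boot-code risk described above:

        #include <stdio.h>
        #include <stdint.h>

        /* Write 4 bytes of metadata into the MBR bootstrap area.
         * Offset 436 is illustrative; it sits just before the 440-443
         * disk-signature bytes and the partition table at 446. */
        int write_meta(const char *dev, uint32_t value)
        {
            FILE *f = fopen(dev, "r+b");
            if (!f) return -1;
            if (fseek(f, 436L, SEEK_SET) != 0) { fclose(f); return -1; }
            size_t n = fwrite(&value, sizeof value, 1, f);
            fclose(f);
            return (n == 1) ? 0 : -1;
        }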

    Read the article

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build direct attached storage by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc., then find any boxes, server or not (the key being enough room for 8-12 drives, fans, etc.), and turn them into a DAS; build two of these DASes and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure, since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs. one).
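    One software route worth naming (strictly speaking it makes the box an iSCSI target, i.e. network block storage, rather than true DAS) is Debian's tgt package; a hedged sketch with illustrative target and device names:

        apt-get install tgt

        # Create a target and expose a local disk as LUN 1:
        tgtadm --lld iscsi --op new --mode target --tid 1 \
               -T iqn.2012-01.local.example:disks
        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
               -b /dev/sdb

        # Allow initiators to connect:
        tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL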

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has all except (1) access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and (2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except (1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except (1) transparent file system access, which is only available for one user.
    - Amazon Cloud: doesn't offer (1) transparent file system access.
    - Rackspace Cloud Drive: has all except (1) access control and (2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • ASP.NET MVC 2 generation of the List/Index view

    - by Klas Mellbourn
    ASP.NET MVC 2 has powerful features for generating the model-dependent content of the Edit view (using EditorForModel) and Details view (using DisplayForModel) that automatically utilize metadata and editor (or display) templates:

        <% using (Html.BeginForm()) { %>
            <%= Html.ValidationSummary(true) %>
            <fieldset>
                <legend><%= Html.LabelForModel() %></legend>
                <%= Html.EditorForModel() %>
                <p>
                    <input type="submit" value="Save" />
                </p>
            </fieldset>
        <% } %>

    However, I cannot find any comparable tools for the "last" step of generating the Index view (a.k.a. the List view). There I have to hard-code the columns, first in the row representing the headers and then inside the foreach loop:

        <h2>Index</h2>
        <table>
            <tr>
                <th></th>
                <th>ID</th>
                <th>Foo</th>
                <th>Bar</th>
            </tr>
            <% foreach (var item in Model) { %>
            <tr>
                <td>
                    <%= Html.ActionLink("Edit", "Edit", new { id=item.ID }) %> |
                    <%= Html.ActionLink("Details", "Details", new { id=item.ID }) %> |
                    <%= Html.ActionLink("Delete", "Delete", new { id=item.ID }) %>
                </td>
                <td><%= Html.Encode(item.ID) %></td>
                <td><%= Html.Encode(item.Foo) %></td>
                <td><%= Html.Encode(String.Format("{0:g}", item.Bar)) %></td>
            </tr>
            <% } %>
        </table>

    What would be the best way to generate the columns (utilizing metadata such as HiddenInput), with the aim of making the Index view as free of model particulars as Edit and Details?
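    MVC 2 ships no GridForModel, but a metadata-driven helper is straightforward to sketch. This hedged version (all names are mine) walks ModelMetadata the same way the built-in templates do, honoring ShowForDisplay and HideSurroundingHtml so attributes like ScaffoldColumn and HiddenInput drop columns out:

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Web.Mvc;

        public static class IndexHelpers
        {
            // Builds a <table> for a sequence of models, driving the columns
            // from ModelMetadata instead of hard-coded property names.
            public static MvcHtmlString TableForModel<T>(this HtmlHelper html,
                                                         IEnumerable<T> items)
            {
                var columns = ModelMetadataProviders.Current
                    .GetMetadataForType(null, typeof(T))
                    .Properties
                    .Where(p => p.ShowForDisplay && !p.HideSurroundingHtml)
                    .Select(p => p.PropertyName)
                    .ToList();

                var sb = new StringBuilder("<table><tr>");
                foreach (var name in columns)
                    sb.AppendFormat("<th>{0}</th>", html.Encode(name));
                sb.Append("</tr>");

                foreach (var item in items)
                {
                    var meta = ModelMetadataProviders.Current
                        .GetMetadataForType(() => item, typeof(T));
                    sb.Append("<tr>");
                    foreach (var p in meta.Properties
                                          .Where(p => columns.Contains(p.PropertyName)))
                        sb.AppendFormat("<td>{0}</td>",
                                        html.Encode(p.SimpleDisplayText));
                    sb.Append("</tr>");
                }
                sb.Append("</table>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    In the view this would reduce the whole table to something like <%= Html.TableForModel(Model) %>, with the action-link column left as an exercise.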

    Read the article

  • Using PHP, how to parse the title and meta tags from a HTML page?

    - by Dylan Taylor
    Hey guys, I need to be able to get the TITLE and DESCRIPTION metadata out of a page. I've been trying to do this, but I've been getting more errors than actual results. (I have an array of about 10 URLs; usually only about 2 of them give me the description, and I have yet to get the title.) So how do I, in PHP, get the description and title from a remote page, and if there is none or if there's an error, ignore it? -Dylan
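    A minimal sketch of one standard approach, using PHP's built-in DOMDocument (the @ suppressions swallow fetch and parse errors so bad URLs are simply skipped, matching the "ignore it" requirement):

        <?php
        function fetch_title_desc($url) {
            $html = @file_get_contents($url);
            if ($html === false) return null;

            $doc = new DOMDocument();
            if (!@$doc->loadHTML($html)) return null;

            $titles = $doc->getElementsByTagName('title');
            $title = $titles->length ? trim($titles->item(0)->textContent) : '';

            $desc = '';
            foreach ($doc->getElementsByTagName('meta') as $meta) {
                if (strtolower($meta->getAttribute('name')) === 'description') {
                    $desc = $meta->getAttribute('content');
                }
            }
            return array('title' => $title, 'description' => $desc);
        }
        ?>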

    Read the article

  • Problem serializing complex data using WCF

    - by Gustavo Paulillo
    Scenario: a WCF client app calling a web-service (Java) operation which requires a complex object as a parameter. I already have the metadata. Problem: the operation has some required fields, one of which is an enum. The field below (from the generated metadata) isn't in the SOAP sent. I'm using WCF diagnostics and the Windows Service Trace Viewer:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="Consult-Filter", Namespace="http://webserviceX.org/")]
        public partial class ConsFilter : object, System.ComponentModel.INotifyPropertyChanged
        {
            private PersonType customerTypeField;

    The property:

        [System.Xml.Serialization.XmlElementAttribute("customer-type", Form=System.Xml.Schema.XmlSchemaForm.Unqualified, Order=1)]
        public PersonType customerType
        {
            get { return this.customerTypeField; }
            set
            {
                this.customerTypeField = value;
                this.RaisePropertyChanged("customerType");
            }
        }

    The enum:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
        [System.SerializableAttribute()]
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="Person-Type", Namespace="http://webserviceX.org/")]
        public enum PersonType
        {
            /// <remarks/>
            F,
            /// <remarks/>
            J,
        }

    The trace log:

        <MessageLogTraceRecord>
          <HttpRequest xmlns="http://schemas.microsoft.com/2004/06/ServiceModel/Management/MessageTrace">
            <Method>POST</Method>
            <QueryString></QueryString>
            <WebHeaders>
              <VsDebuggerCausalityData>data</VsDebuggerCausalityData>
            </WebHeaders>
          </HttpRequest>
          <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
            <s:Header>
              <Action s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none"></Action>
              <ActivityId CorrelationId="correlationId" xmlns="http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics">activityId</ActivityId>
            </s:Header>
            <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
              <filter xmlns="http://webserviceX.org/">
                <product-code xmlns="">116</product-code>
                <customer-doc xmlns="">777777777</customer-doc>
              </filter>
            </s:Body>
          </s:Envelope>
        </MessageLogTraceRecord>
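    One classic explanation worth checking (hedged; the member name below follows the usual generated-proxy pattern and may not exist in this particular proxy): for value-type fields that are optional in the schema, .NET's XmlSerializer-based proxies normally emit a companion bool "<field>Specified" member, and the element is silently omitted unless that flag is set:

        ConsFilter filter = new ConsFilter();
        filter.customerType = PersonType.F;
        // Without this, XmlSerializer skips <customer-type> entirely:
        filter.customerTypeSpecified = true;  // hypothetical generated member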

    Read the article

  • Suggested php code to read file rating set by Adobe Bridge CS3

    - by DarwinIcesurfer
    Background: I have been attempting, without success, to read the rating that is assigned in Adobe Bridge CS3, using the Creative Commons metadata toolkit for PHP. I am using shared hosting, so I do not have an opportunity to recompile PHP with different modules. Is PHP code available that could be used to read the rating that is embedded in the .jpg file? I have read that this is an XMP (XML) formatted section within the file.
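    Since Bridge stores the rating as the xmp:Rating property, and the XMP packet is plain XML embedded in the JPEG, one crude extension-free approach is a regular expression over the raw bytes; a hedged sketch (fragile by design, but it needs nothing beyond stock PHP, which suits shared hosting):

        <?php
        function read_xmp_rating($path) {
            $data = file_get_contents($path);
            if ($data === false) return null;
            // xmp:Rating appears either as an attribute or as an element:
            if (preg_match('/xmp:Rating="(-?\d+)"/', $data, $m)) {
                return (int) $m[1];
            }
            if (preg_match('#<xmp:Rating>(-?\d+)</xmp:Rating>#', $data, $m)) {
                return (int) $m[1];
            }
            return null; // no rating found
        }
        ?>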

    Read the article

  • How to iterate all query paths within an image header namespace in WIC?

    - by muruge
    Hello All, I am using the Windows Imaging Component to read/write image metadata in my WPF application. I would like to know if there is an efficient way to tell whether any query paths exist within a namespace. For instance, I would like to know if any paths within the IPTC namespace exist, and if not, I want to delete the namespace from the image header. Any pointers would be greatly appreciated. Thanks, Murugesh.
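    Via WPF's WIC wrapper, BitmapMetadata is itself enumerable as a collection of query strings, so nested blocks can be walked recursively; a hedged sketch (file name is illustrative) that dumps every query path, which makes an "is the IPTC block empty?" check a matter of filtering the prefix:

        using System;
        using System.IO;
        using System.Windows.Media.Imaging;

        static class MetadataWalker
        {
            static void DumpQueries(BitmapMetadata md, string prefix)
            {
                foreach (string q in md)
                {
                    object val = md.GetQuery(q);
                    var nested = val as BitmapMetadata;
                    if (nested != null)
                        DumpQueries(nested, prefix + q);   // descend into the block
                    else
                        Console.WriteLine(prefix + q + " = " + val);
                }
            }

            static void Main()
            {
                using (var fs = File.OpenRead("photo.jpg"))  // hypothetical file
                {
                    var dec = BitmapDecoder.Create(fs,
                        BitmapCreateOptions.PreservePixelFormat,
                        BitmapCacheOption.OnDemand);
                    var md = dec.Frames[0].Metadata as BitmapMetadata;
                    if (md != null) DumpQueries(md, "");
                }
            }
        }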

    Read the article

  • Is it possible to add IPTC data to a JPG using python when no such data already exists?

    - by ventolin
    With the IPTCInfo module under Python (http://snippets.dzone.com/posts/show/768 for more info) it's possible to read, modify, and write IPTC info in pictures. However, if a JPG doesn't already have IPTC information, the module simply raises an exception; it doesn't seem to be able to create and add this metadata information itself. What alternatives are there? I've googled for the past hour but to no avail whatsoever.
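    Hedged suggestion, worth trying before switching libraries: in the IPTCInfo versions I've seen, the constructor takes a force flag that hands back an empty info object instead of raising when no IPTC block exists yet:

        from iptcinfo import IPTCInfo

        # force=True: don't raise if the JPEG has no IPTC data yet
        # ('photo.jpg' and the field values are illustrative).
        info = IPTCInfo('photo.jpg', force=True)
        info.data['caption/abstract'] = 'A new caption'
        info.keywords = ['example', 'metadata']
        info.save()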

    Read the article

  • MetadataException: Unable to load the specified metadata resource

    - by J. Steen
    All of a sudden I keep getting a MetadataException on instantiating my generated ObjectContext class. The connection string in App.config looks correct and hasn't changed since it last worked, and I've tried regenerating a new model (EDMX file) from the underlying database with no change. Anyone have any... ideas? Edit: I haven't changed any properties, I haven't changed the name of any output assemblies, and I haven't tried to embed the EDMX in the assembly. I've merely waited 10 hours from leaving work until I got back. And then it wasn't working anymore. I've tried recreating the EDMX. I've tried recreating the project. I've even tried recreating the database, from scratch. No luck whatsoever. I'm truly stumped.
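    For anyone hitting the same thing: this exception usually points at the metadata= part of the EF connection string no longer matching where the csdl/ssdl/msl resources actually ended up. A hedged sketch of the wildcard form, with an illustrative model name, that searches every loaded assembly for the embedded resources:

        <add name="MyEntities"
             connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDb;Integrated Security=True&quot;"
             providerName="System.Data.EntityClient" />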

    Read the article

  • DataAnnotations Automatic Handling of int is Causing a Roadblock

    - by DM
    Summary: DataAnnotations' automatic handling of an int? is making me rethink using them at all. Maybe I'm missing something and there's an easy fix, but I can't get DataAnnotations to cooperate. I have a public property with my own custom validation attribute:

        [MustBeNumeric(ErrorMessage = "Must be a number")]
        public int? Weight { get; set; }

    The point of the custom validation attribute is to do a quick check that the input is numeric and display an appropriate error message. The problem is that when the model binder tries to bind a string to the int?, it automatically fails and displays "The value 'asdf' is not valid for Weight." For the life of me I can't get DataAnnotations to stop handling that so I can take care of it in my custom attribute. This seems like it would be a popular scenario (validating that the input is numeric), and I'm guessing there's an easy solution, but I didn't find it anywhere.
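    The default message comes from the model binder rather than from DataAnnotations, so one common workaround is a custom binder for int? that reports its own message; a hedged sketch (the class name is mine, and whether this fits depends on how the MustBeNumeric attribute is meant to participate):

        using System.Web.Mvc;

        public class NullableIntBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext,
                                    ModelBindingContext bindingContext)
            {
                ValueProviderResult v =
                    bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
                if (v == null || string.IsNullOrEmpty(v.AttemptedValue))
                    return null;

                int parsed;
                if (int.TryParse(v.AttemptedValue, out parsed))
                    return (int?)parsed;

                // Keep the attempted value, but substitute our own message
                // for the binder's default one:
                bindingContext.ModelState.SetModelValue(bindingContext.ModelName, v);
                bindingContext.ModelState.AddModelError(bindingContext.ModelName,
                                                        "Must be a number");
                return null;
            }
        }

        // Registered once at startup (Global.asax):
        // ModelBinders.Binders.Add(typeof(int?), new NullableIntBinder());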

    Read the article

  • WCF binding programatically and adding metadata excange

    - by totem
    Hi, my problem is that I'm trying to enable mex on a service that uses net.tcp binding. That binding is for localhost port 5000; when I want to enable mex on the same port and have it available over HTTP, I have to enable HttpGetEnabled on the service host. All this works well, but when I try to add the binding it fails, because the binding is "net.tcp://localhost:5000/test". Is there a way to enable mex on the same port but with a different URI, without enabling NetTcpPortSharing? I don't think the code is the issue, since I can add the mex endpoint on a different port through the code and it works fine. The question is how to have net.tcp://localhost:5000/test as the WCF TCP-based endpoint and net.tcp://localhost:5000/test/mex as the HTTP mex endpoint that gives the WSDL for the TCP endpoint. Thanks, Totem
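    As a point of comparison, TCP metadata exchange at a sub-address of the same port does work without port sharing, because it rides the service's own transport; a hedged sketch (service and contract names are illustrative). Serving the WSDL over HTTP on that same port is the part that won't fly, since HttpGetEnabled needs an HTTP base address of its own:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(TestService),
                    new Uri("net.tcp://localhost:5000/test"));
                host.AddServiceEndpoint(typeof(ITestService),
                    new NetTcpBinding(), "");

                // mex over the same TCP port, at .../test/mex:
                host.Description.Behaviors.Add(new ServiceMetadataBehavior());
                host.AddServiceEndpoint(typeof(IMetadataExchange),
                    MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

                host.Open();
                Console.ReadLine();
                host.Close();
            }
        }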

    Read the article
