Search Results

Search found 67192 results on 2688 pages for 'excel external data'.


  • Trying to grok Linux quotas: where is the data stored?

    - by CarpeNoctem
    So all the tutorials and documentation for the Linux quota system have left me confused. For each filesystem with quotas enabled, where is the actual quota information stored? Is it filesystem metadata, or is it in a file? Say user foo creates a new file on /home. How does the kernel determine whether user foo is below their hard limit? Does the kernel have to tally up quota information on that filesystem each time, or is it in the superblock or somewhere else? As far as I understand, the kernel consults the aquota.user file for the actual rules, but where is the current quota usage data stored? Can this be viewed with any tools outside repquota and the like? TIA!!

    Update: Thanks for the help. I had already read that mini-HOWTO, and I am pretty clear on the usage of the user-space tools. What I was unclear on is whether the usage data was ALSO in the file that stores the per-user limits, and you answered this with a yes. From what I can tell, rc.sysinit runs quotacheck and quotaon on startup. The quotacheck program analyzes the filesystem and updates the aquota.* files. It then makes use of quota.h and the quotactl() syscall to inform the kernel of the quota info. From that point forward the kernel hashes that information and increments/decrements the quota stats as changes occur. Upon shutdown, the init.d/halt script runs the quotaoff command RIGHT before the filesystems are unmounted. The quotaoff command does not appear to update the aquota.* files with the information the kernel has in memory. I say this because the {a,c,m}times of the aquota.user file are only updated upon a reboot of the system or by manually running the quotacheck command. It appears, as far as I can tell, that the kernel just drops its up-to-date usage data on the floor at shutdown. This information is never used to update the aquota.* files; they are updated during startup by quotacheck (run from rc.sysinit). Seems silly to me, since that updated info had already been collected by the kernel. So... in conclusion, I am still not entirely clear on the methods. ;)
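
    Incidentally, the live usage counters the kernel keeps can be read directly with the quotactl(2) syscall, which is the interface repquota itself is built on. A minimal Python sketch via ctypes (assuming Linux with quotas enabled; /dev/sda1 is a placeholder for the device backing the quota'd filesystem):

        # Read one user's current quota usage via quotactl(2), bypassing repquota.
        import ctypes, ctypes.util, os

        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

        USRQUOTA = 0                       # constants from <linux/quota.h>
        Q_GETQUOTA = 0x800007

        def qcmd(cmd, qtype):              # the QCMD() macro: subcommand shift is 8
            return (cmd << 8) | (qtype & 0xFF)

        class IfDqblk(ctypes.Structure):   # struct if_dqblk from <linux/quota.h>
            _fields_ = [("dqb_bhardlimit", ctypes.c_uint64),
                        ("dqb_bsoftlimit", ctypes.c_uint64),
                        ("dqb_curspace",   ctypes.c_uint64),  # current usage, bytes
                        ("dqb_ihardlimit", ctypes.c_uint64),
                        ("dqb_isoftlimit", ctypes.c_uint64),
                        ("dqb_curinodes",  ctypes.c_uint64),  # current inode count
                        ("dqb_btime",      ctypes.c_uint64),
                        ("dqb_itime",      ctypes.c_uint64),
                        ("dqb_valid",      ctypes.c_uint32)]

        dq = IfDqblk()
        if libc.quotactl(qcmd(Q_GETQUOTA, USRQUOTA), b"/dev/sda1",
                         os.getuid(), ctypes.byref(dq)) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        print("bytes used:", dq.dqb_curspace, "inodes used:", dq.dqb_curinodes)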

    Read the article

  • What's the correct approach for passing data from several models into a service?

    - by Doug Chamberlain
    I have an AccountModel and a page where the user can upload a file. What I would like to have happen when the user uploads the file is that the PageController does something like the following (a quick attempt written just for this question, to illustrate it):

        public class PageController : Controller {
            private Service service;

            public ActionResult Upload(HttpPostedFileBase f) {
                // the controller reaches into the AccountModel for the logged-in user's tax id
                service.SaveFile(f, AccountModel.CurrentlyLoggedInUser.TaxId);
                return View();
            }
        }

        public class Service {
            // a bunch of validation and error checking to make sure the file is good to store
        }

    Wouldn't this approach be bad practice, since I'm making my controller dependent on the existence of the AccountModel? This will become a HUGE program over the next few years, and I really want to maximize the quality of the framework now.

    Read the article

  • A good SQL database for processing a lot of data?

    - by Dorian
    I have to process something like 10-100 million records and give the data to the client when it's finished. The data is given as SQL requests to execute in the database. He has a powerful server with MySQL, so I think it will be fast enough on his end. The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my computer) but more powerful. What should I use? Or am I doomed to use MySQL?

    Read the article

  • How to "paint" the data layer of a CD using a CD drive?

    - by Jens
    I am looking for software to "paint" geometric shapes, dots or lines on the data layer of a writable CD (or DVD) using a standard drive. These do not have to be visible to the naked eye; I'd try to abuse the small dot size on the CD for some scientific measurements. I am aware of the "LightScribe" feature of some drives, and that is not what I am looking for. Most of the software available is of course limited to writing music or data, and does not offer the low-level "place a dot at this radius, this angle" functionality. Is there something out there for me?

    Read the article

  • Is it reasonable to make a RAID-1 array with a RAM disk and a physical disk to maximize read performance and protect data?

    - by Petr Pudlák
    In one of the answers on SO (I forget which one) I've seen a suggestion to make a RAID-1 array composed of a RAM disk and a physical partition. By adding the physical partition with --write-mostly and enabling --write-behind, the system should read everything instantly from the RAM disk but still save all data to the physical partition, so that the data are preserved and the RAID array can be assembled again after reboot. Is such a setup reasonable? Will it perform any better in some scenario than having just the physical partition and perhaps tweaking the kernel to favor the disk cache (swappiness and vfs_cache_pressure)?
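
    For reference, a sketch of the mdadm invocation such a setup would use. Device names are placeholders, and note that --write-behind applies only to --write-mostly members and requires a write-intent bitmap; this is illustration, not a tested recipe:

        # Hypothetical assembly of the RAM-disk/partition mirror described above.
        # Must run as root; /dev/ram0 and /dev/sdb1 are placeholder devices.
        import subprocess

        subprocess.run([
            "mdadm", "--create", "/dev/md0",
            "--level=1", "--raid-devices=2",
            "--bitmap=internal",            # required for --write-behind
            "--write-behind=256",           # allow up to 256 outstanding lazy writes
            "/dev/ram0",                    # RAM disk: serves the reads
            "--write-mostly", "/dev/sdb1",  # physical partition: written, rarely read
        ], check=True)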

    Read the article

  • Announcement: ZFS Backup Appliance

    - by uwes
    Announcing Product Software Changes for the Sun ZFS Backup Appliance. Effective December 4th, 2012, Replication and Cloning software licenses are no longer mandatory purchases with the Sun ZFS Backup Appliance. Replication and Cloning are still available as optional additions on new Sun ZFS Backup Appliance quotes, or as additions to existing systems.

    For more product information, go to:

      External: ZFS Storage Appliance Oracle.com page
      External: ZFS Storage Appliance Oracle Technology Network page
      External: Software download support.oracle.com page

    Read the article

  • Google Analytics not tracking data correctly: IP address issue?

    - by PaperThick
    I have developed a small site for a client, and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(
            ['_setAccount', 'UA-XXXXXXXX-2'],   // My company's GA account
            ['_trackPageview'],
            ['b._setAccount', 'UA-XXXXXXXX-1'], // Test GA account
            ['b._trackPageview'],
            ['th._setAccount', 'UA-XXXXXXX-3'],
            ['th._setDomainName', '.clientdomain.se'], // Client GA account
            ['th._trackPageview']
        );
        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>
        </head>

    As you can see, I report the GA pageviews to the client as well, and the GA script is tracking visitors and pageviews at both ends. The problem is that on my client's side the visitor count is more than double what it is on my end (20,000 vs 5,000). At first I thought the data was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10,000 visits and then stopped tracking, because that was the limit on my account. The page my site is on is at an IP address (http://XXX.XXX.XX.X/campaign/) and not on a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • Export Import error 'SSIS Data Flow Task could not be created' ... registering DTSPipeline.dll, cannot create task "STOCK:PipelineTask"

    - by Moin Zaman
    I'm about to throw in the towel on this one. I'm running SQL Server 2008 Enterprise on Windows 7 x64 and can't get past this issue. When I try to Import/Export data from databases through SQL Server Management Studio, I get the following error:

        TITLE: SQL Server Import and Export Wizard
        ------------------------------
        The SSIS Data Flow Task could not be created. Verify that DTSPipeline.dll
        is available and registered. The wizard cannot continue and it will terminate.
        ------------------------------
        ADDITIONAL INFORMATION:
        Cannot create a task with the name "STOCK:PipelineTask". Verify that the
        name is correct. ({0194F10C-9860-4A4F-AF8B-DE7EFD89859F})

    I have tried many solutions found via Google, but none of them have worked. A side issue that may be related: when I try to create an Integration Services project in Business Intelligence Development Studio, I get a 'project creation failed' error.

    Read the article

  • Segment subdomains with Google Analytics?

    - by andrewpthorp
    So, when a website has multiple subdomains:

        www.example.com
        foo.example.com
        bar.example.com

    what is the best way to use Google Analytics to segment the data? I would prefer to have access to 'All Data', 'Data from foo.example.com', and 'Data from bar.example.com'. I tried setting up 3 different views and setting a filter on the foo/bar views that says: include only traffic from the ISP domain that are equal to foo.example.com. However, I am not seeing any data collected in those views. I do, however, see all data in the 'All Data' view, but I can't figure out how to segment it. I am including analytics.js in the application.haml layout, which is always loaded in this app. Thanks!

    Read the article

  • Recommendation for a webshop with an API

    - by m.sr
    I'm searching for a webshop. The problem with my search is that the webshop software of my choice needs to have a usable API or some interface for external applications. For example, I need to place orders from an external application, or get product descriptions and warehouse stock into the external application. I would like a webshop where the web interface is just one way to interact with the whole system. There are some other requirements which have to be fulfilled, but I guess they are kind of common:

      - running on Linux
      - MySQL (we already have MySQL replication and backup in place)
      - I like open source, but I'm willing to pay for it if it's worth it

    I found some webshops on the net, but perhaps you can tell me if there's any hope for a webshop with a good API before I go and test all of them; at first look I didn't find any docs about an interface to external applications for any of my search results. Thank you!

    Read the article

  • How do I set up two existing disks with identical contents as a single mirrored volume in Windows 7 without losing data?

    - by Software Monkey
    I have two data disks that were, heretofore, in a mobo RAID configuration in Windows 7. They are now separate AHCI disks, visible in Computer Management. How do I go about making them a single mirrored volume in Windows? Note: The data is backed up on two other separate disks, but it's a fair amount of work to do a restore (over 120,000 files, and I have to reset permissions). Note 2: Currently the two disks are identical, and I can use the content of either one for this.

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and provides an option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local backup option is selected, 60% of the available shared storage will be allocated to the Fast Recovery Area, which contains database backups and archive logs. The External backup option will allocate 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of high redundancy and external backups:

      Disk Group Redundancy: High (triple mirroring), providing ~4TB of available storage
      Database Backup Location: External, so 20% of available shared storage is allocated to +RECO
      Result: +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage

    What about normal redundancy with external backups?

      Disk Group Redundancy: Normal (double mirroring), providing ~6TB of available storage
      Database Backup Location: External, so 20% of available shared storage is allocated to +RECO
      Result: +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage

    As a best practice, we would recommend using normal redundancy for your test and/or development Oracle Database Appliances and high redundancy for production.
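
    The arithmetic behind those numbers is simple enough to sketch; the 12TB raw figure and the 20%/60% Fast Recovery Area splits come from the text above:

        # Usable-storage split on the appliance, per the rules described above.
        RAW_TB = 12.0

        def usable(redundancy, backup_location):
            mirrors = {"high": 3, "normal": 2}[redundancy]       # triple vs double mirroring
            available = RAW_TB / mirrors
            reco = {"external": 0.20, "local": 0.60}[backup_location]
            return {"+DATA": round(available * (1 - reco), 2),
                    "+RECO": round(available * reco, 2)}

        print(usable("high", "external"))    # {'+DATA': 3.2, '+RECO': 0.8}
        print(usable("normal", "external"))  # {'+DATA': 4.8, '+RECO': 1.2}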

    Read the article

  • Where can I get a list or database of light reflectance values for different materials?

    - by mikidelux
    I'm implementing lighting for a WebGL app, but I'm not an artist, so I don't know how to generate a list of materials with their values (diffuse, specular, ambient and shininess), or where to obtain one. I've been searching a lot but with no luck. Is there any list or DB I might have overlooked? Any common repository or something similar? Thanks in advance. Note: English is not my main language; let me know if you don't understand something and I'll try to rephrase it.

    Read the article

  • Is an extra collision mesh for level data worth the hassle?

    - by Serthy
    What is the optimal approach for collision detection with the environment in a 3D engine (with triangle-mesh-based geometry, no BSP)?

    A) Use the render mesh

      [+] no additional work for artists to fiddle with collision detection
      [-] high detail is harder for physics calculations
      [+/-] maybe use collidable flags for materials (sketched below)
      [+/-] compute the collision mesh from the render mesh

    B) Use an additional collision mesh

      [+] faster/more optimal collision detection
      [-] additional work (either by the artist, or by the programmer who has to develop an algorithm to compute it from the render mesh)
      [-] more memory usage

    How do AAA titles handle this? And what are the indie devs' approaches?
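
    A minimal sketch of the flag-based variant of option A: derive the collision set by keeping only the triangles whose material is marked collidable. The mesh and material layout here are hypothetical, just to show the idea:

        from dataclasses import dataclass

        @dataclass
        class Material:
            name: str
            collidable: bool = True   # e.g. foliage cards or decals would set this False

        @dataclass
        class Triangle:
            vertices: tuple           # three (x, y, z) tuples
            material_id: int

        def build_collision_mesh(triangles, materials):
            """Filter the render mesh down to physics-relevant triangles."""
            return [t for t in triangles if materials[t.material_id].collidable]

        mats = [Material("concrete"), Material("grass_card", collidable=False)]
        tris = [Triangle(((0, 0, 0), (1, 0, 0), (0, 1, 0)), 0),
                Triangle(((0, 0, 0), (1, 0, 0), (0, 0, 1)), 1)]
        print(len(build_collision_mesh(tris, mats)))  # 1 triangle survives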

    Read the article

  • How do I scale EC2 and push out code / data to my instances?

    - by chris
    Unfortunately I have only a limited knowledge of server architecture; I come from a development background. I am looking to ensure my new app can scale properly using EC2. I currently have a t1.micro for development, running Windows with SQL Server 2008. The system allows students to come to our site to search for a mentor and update their profile with pictures, employment history, etc.; roughly the same sort of thing as a LinkedIn profile. I need this to be able to scale very quickly without wasted resources. I understand the following is important: separation of data, application, etc. I think I will achieve this by hosting images using S3, the database instance via RDS, and upgrading the EC2 instance. My main question is: how do I push data / code out to multiple EC2 / RDS instances seamlessly?
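
    For the static-asset part of that plan, the usual pattern is to push each file to S3 once and have every instance reference the same copy. A minimal boto3 sketch (bucket name, paths and URL style are placeholder assumptions):

        import boto3

        s3 = boto3.client("s3")

        def publish_asset(local_path, key, bucket="my-app-assets"):
            """Upload a file to S3 and return a URL the web tier can reference."""
            s3.upload_file(local_path, bucket, key)
            return "https://%s.s3.amazonaws.com/%s" % (bucket, key)

        print(publish_asset("uploads/profile_123.jpg", "profiles/123.jpg"))

    Pushing code to the EC2 instances themselves is a separate concern, typically handled by baking an AMI or by a deployment script that updates each instance behind the load balancer.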

    Read the article

  • Why are the prices for broadband bandwidth at data centers much higher than consumer/small business offerings?

    - by odemarken
    The prices for broadband bandwidth at data centers are sometimes as much as 10x higher than for a typical small business/consumer connection, at least where I live. Now, I understand those are two different kinds of products, but what exactly are the differences? Is it mainly because the bandwidth you get at a data center is guaranteed (CIR), while a consumer offer lists maximal bandwidth (EIR/MIR)? Or are there other factors as well? (Note: my previous, much more specific question on the same general topic was closed as not constructive. I tried to extract the core issue and present it in a way that can be answered objectively. If you feel that this question is still bad and should be closed, please comment and explain why.)

    Read the article

  • How do I set up federated SQL data to display limited information to a Web server in the DMZ?

    - by Pcav
    I have a SQL server behind a firewall. I need to push some limited SQL 2005 information to a Web server in the DMZ so that I do not have to let database queries come all the way into the database server on our internal network. I want to push a small amount of dynamic data to a Web server in the DMZ and lock it down, so that our hosted website does not need to come into the internal network for information. I want to put a server in the DMZ that will be the only connection allowed to the SQL database; this DMZ server will be the only server that can have any sort of connection to the back-end database, so the hosting provider just pulls the data from our server in the DMZ...

    Read the article

  • Microsoft, where did you get those data about ODF?

    Stop: "Back then I knew, just as I know today, that there is no law or regulation in Italy, not even at the city level, that mandates ODF as the only accepted format for office documents, regardless of the context. What I did come across in the last year, instead, were cases where nobody seemed to know about ODF or law proposals..."

    Read the article

  • What Java data structure/design pattern best models this object, considering it would perform these methods?

    - by zundarz
    Methods:

      1. getDistance(CityA, CityB)             // Returns the distance between two cities
      2. getCitiesInRadius(CityA, integer)     // Returns cities within a given distance of another city
      3. getCitiesBeyondRadius(CityA, integer) // Returns cities beyond a given distance of another city
      4. getRemoteDestinations(integer)        // Returns all city pairs greater than x distance from each other
      5. getLocalDestinations(integer)         // Returns all city pairs within x distance of each other
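
    One candidate shape is a symmetric distance matrix keyed by city; in Java that could be a class wrapping a Map<String, Map<String, Integer>>. Sketched here in Python for brevity (city names and distances are made up, and it assumes a distance has been recorded for every pair you query):

        class CityMap:
            def __init__(self):
                self._dist = {}   # city -> {other city -> distance}

            def add_distance(self, a, b, distance):
                self._dist.setdefault(a, {})[b] = distance
                self._dist.setdefault(b, {})[a] = distance   # keep it symmetric

            def get_distance(self, a, b):
                return self._dist[a][b]

            def get_cities_in_radius(self, a, radius):
                return [c for c, d in self._dist[a].items() if d <= radius]

            def get_cities_beyond_radius(self, a, radius):
                return [c for c, d in self._dist[a].items() if d > radius]

            def _pairs(self):    # each unordered pair once
                return [(a, b, d) for a in self._dist
                        for b, d in self._dist[a].items() if a < b]

            def get_remote_destinations(self, x):
                return [(a, b) for a, b, d in self._pairs() if d > x]

            def get_local_destinations(self, x):
                return [(a, b) for a, b, d in self._pairs() if d <= x]

        m = CityMap()
        m.add_distance("Ames", "Boone", 15)
        m.add_distance("Ames", "Cedar", 40)
        print(m.get_cities_in_radius("Ames", 20))   # ['Boone']
        print(m.get_remote_destinations(20))        # [('Ames', 'Cedar')]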

    Read the article

  • Is having sensitive data in a PHP script secure? [closed]

    - by tkbx
    Possible Duplicate: What attributes of PHP make it insecure? I've heard that PHP is somewhat secure because Apache won't allow the download of raw PHP. Is this reliable, though? For example, if you wanted to password protect something, but didn't want to create a database, would something like $pass = "123454321"; be safe? Bottom line, is it safe to assume that nobody has access to the actual .php file?

    Read the article
