Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Setting up Raid 1 Array for Home Server

    - by user1048116
    I'm not sure if this is even possible, but it's worth asking on here! Essentially I have an old machine at home (well, not old hardware-wise, but I recently built a new gaming rig), which I decided to install a copy of W2008 R2 on and use as a file/backup server and media-center-ish machine. As of now, it has a single drive partitioned into C and D, with D being the Data partition. I happened to find an old 1TB SATA drive lying around at home, and was wondering if it's possible to set up a RAID 1 array in my rig within Windows without needing to lose everything on my first drive (or maybe even just mirror a specific partition, say the Data partition, as this is just what stores my photos etc). Maybe this isn't possible, but you never know :) Regards, T.C
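
    For reference, Windows Server 2008 R2 can mirror an existing volume in software without reformatting it, via Disk Management or diskpart, once both disks are dynamic. A minimal diskpart sketch, assuming the existing drive is disk 0, the new 1TB drive is disk 1, and D is the volume to mirror (back everything up first regardless):

        diskpart
        rem convert both disks to dynamic (required for Windows software mirroring)
        DISKPART> select disk 0
        DISKPART> convert dynamic
        DISKPART> select disk 1
        DISKPART> convert dynamic
        rem add a mirror of just the D volume onto the second disk
        DISKPART> select volume D
        DISKPART> add disk=1

    This mirrors only the selected volume, which matches the "just mirror the Data partition" idea; the existing data stays in place while the mirror resyncs.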


  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges one faces when doing multi-language support in WPF is when one has several projects in one solution (i.e. a business layer & UI layer) and you want multi-language support. Typically each project would have a resource file, meaning if you have 3 projects in a solution you will have 3 resource files. For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated it will be for the translator. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps.

    Step 1 - Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project.

    Step 2 - Reference this project from the other projects.

    Step 3 - Move all the resources from the other resource files to the translation project's resource file.

    Step 4 - Set the translation project's resource file's access modifier to public.

    Step 5 - Update all other projects to use the translation resource file instead of their local resource files. To do this in XAML you need to expose the project as a namespace at the top of the XAML file. Note that the example below is for a project called MaxCutLanguages, so you need to put the correct project name in its place.

        xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages"

    And then in the actual XAML you replace any hard-coded text with a reference to the resource file:

        <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/>

    End result: you can now delete all the resource files in the other projects, as you now have one centralized resource file.
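
    The same centralized strings are equally reachable from code-behind, since the generated Resources class is public after Step 4. A small sketch, reusing the MaxCutLanguages example project name from above:

        using System.Windows;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                // same resource the XAML {x:Static} binding uses, read from code-behind
                this.Title = MaxCutLanguages.Properties.Resources.HelloWorld;
            }
        }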


  • Recommendation for a non-standard SSL port

    - by onurs
    Hey guys, On our server I have a single IP and need to host 2 different SSL sites. The sites have different owners, so they have different SSL certificates and can't share the same certificate with SAN. So as a last resort I modified the web application to allow a specified port to be used for secure pages. For its simple look I used port 200. However, I'm worried that some visitors may be unable to see the site because their firewalls / proxies block that port for SSL connections. I heard some people were unable to see the website (a home user and someone from an enterprise company), though I don't know if this was the reason. So, any recommendations for a non-standard SSL port number (443 is used by the other site) that may work better for visitors than port 200? Like 8080 or 8443 perhaps? Thanks!
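
    For what it's worth, 8443 is the conventional alternate HTTPS port, and outbound filters tend to treat it more kindly than an arbitrary low port like 200. A minimal sketch of binding the second site to it, assuming Apache (the question doesn't name the web server, and all names/paths here are placeholders):

        Listen 8443
        <VirtualHost *:8443>
            ServerName site2.example.com
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/site2.crt
            SSLCertificateKeyFile /etc/ssl/private/site2.key
            DocumentRoot /var/www/site2
        </VirtualHost>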


  • Why does Windows share permissions change file permissions?

    - by Andrew Rump
    When you create a (file system) share (on Windows 2008 R2) with access for specific users, does it change the access rights on the files to match the access rights on the share? We just killed our intranet web site when sharing the INetPub folder (to a few specified users). It removed the file access rights for authenticated users, i.e., users could no longer log in using single sign-on (using IE & AD)! Could someone please tell me why it behaves like this? We now have to reapply the access rights every time we change the users on the share, killing the site in the process each time!
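
    Until the root cause is sorted out, re-granting the lost ACL can at least be scripted rather than clicked through. A sketch with icacls, assuming the default inetpub location and that inheritable read/execute for Authenticated Users is what the site needs back:

        rem restore read & execute for Authenticated Users, inherited by subfolders and files
        icacls "C:\inetpub" /grant "NT AUTHORITY\Authenticated Users":(OI)(CI)RX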


  • Ubuntu running like mud, system hangs and looks like its running at 3FPS

    - by user240803
    the system specs: AMD XP 3200+, 1GB DDR 333 RAM, 160GB IDE HD, NVIDIA FX 5500 AGP video card, Compaq Presario sr1230nx. The system takes forever to boot, and when it does it runs like total mud; it reminds me of an overloaded system that has too many windows open or something... fresh install. I've tried so many things, like new memory (it had a 512MB stick) and a new video card (the onboard 8MB SiS sounded like the problem, but wasn't... it has gotten a little faster now, but not by much). I tried to disable everything on the motherboard that could be disabled, with no help... this machine runs Windows XP, 7, and 8 JUST FINE!!! I mean, for a single-core CPU, Win8 runs AWESOME!!! But I already have a gaming desktop with Windows 8 Pro; I want a Linux machine to get some time in and learn a few things... I want Ubuntu because of the Software Center, so I can install the things I want until I am familiar with the command line. I've worked on computers since I was 12 and I remember some of the DOS commands, but I guess these are a little different... anyway, any ideas? I've also tried both drivers for the NVIDIA card and that didn't help either... it's not the card, since it did this with both the NVIDIA card and the SiS onboard... it also does this in live mode from the USB, so I don't think it's the hard drive... I'm running out of hardware options to try... I know this version of Linux works because I've booted it on other machines and it ran great... what is it with this Compaq? Here is a vid of exactly what it's doing... let me know if you need anything else, I am right by the computer tonight so ask anything... http://youtu.be/-P-XNo81098


  • Adobe Reader - Content Preparation progress

    - by kubal5003
    Hello, I was wondering recently if it is just me or if anyone else has noticed it: why does Adobe Reader display a "Content preparation" window with a progress bar after opening every single document, and why does it last for ages? On Linux, PDF readers work a whole lot better (faster); on Windows, other readers also work faster. Some years back Adobe Reader used to be quick too. What has happened? PDF files aren't bigger or more complex compared to 3-4 years ago. Computers are at least dual core with much more RAM, and displaying PDF files is getting slower and slower..


  • thick client migration to web based application

    - by user1151597
    This query is related to application design and the technology I should consider during migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200ms rate) sent from the device to the application. The request to start the cyclic data is sent only once in the beginning, and then the application keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am looking for a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, while I manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF, etc. are what I should consider for designing the migration. Thanks in advance.
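
    If WCF is the route taken, a duplex contract is the piece that maps onto "one server-side instance pushing cyclic data to many clients". A rough sketch only, with hypothetical names, not a drop-in design:

        // C#: hypothetical WCF duplex contract for fanning out cyclic device data
        using System.ServiceModel;

        public interface IDeviceDataCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnCyclicData(byte[] sample);   // pushed by the server every ~200ms
        }

        [ServiceContract(CallbackContract = typeof(IDeviceDataCallback))]
        public interface IDeviceMonitor
        {
            [OperationContract]
            void StartMonitoring();   // server sends the existing UDP start request once

            [OperationContract]
            void StopMonitoring();
        }

        // InstanceContextMode.Single keeps one service instance (and so one device
        // session) alive on the server, which is the single-instance requirement above
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class DeviceMonitor : IDeviceMonitor
        {
            public void StartMonitoring() { /* reuse the existing communication layer here */ }
            public void StopMonitoring()  { /* send the stop request over UDP */ }
        }

    The existing business and communication layers would sit behind DeviceMonitor unchanged; only the WinForms front end is replaced by callback-subscribing clients.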


  • foreign-architecture

    - by speedy-MACHO
    Always when I install something, I get the following error multiple times:

    Unknown configuration key 'foreign-architecture' found in your 'dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with 'dpkg --add-architecture' invocations at the command line.

    When I try dpkg --add-architecture I get:

    Unknown configuration key `foreign-architecture' found in your `dpkg' configuration files. This warning will become a hard error at a later date, so please remove the offending configuration options and replace them with `dpkg --add-architecture' invocations at the command line. dpkg: error: --add-architecture takes one argument Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' !

    I've no problems yet, but since it says this warning will become a hard error at a later date, I'd better do something about it. When I search for 'foreign-architecture', I find an empty file containing not a single byte. I somehow can't delete that file. Please help, it's kind of creepy...
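
    For context, this warning comes from the pre-multiarch configuration style: an old dpkg config file (commonly /etc/dpkg/dpkg.cfg.d/multiarch, though the exact path is an assumption here) containing a foreign-architecture line. A sketch of the usual cleanup, assuming i386 is the foreign architecture you want to keep:

        # find which dpkg config file still carries the obsolete key
        grep -r foreign-architecture /etc/dpkg/
        # remove (or edit) the offending file, commonly:
        sudo rm /etc/dpkg/dpkg.cfg.d/multiarch
        # re-register the architecture the new way; note --add-architecture needs one argument
        sudo dpkg --add-architecture i386
        sudo apt-get update

    The "takes one argument" error in the question is simply because the command was run without naming an architecture.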


  • How to configure auto-logon in Active Directory

    - by Jonas Stensved
    I need to improve our account management (using Active Directory) for a customer support site with 50+ computers. The default "AD" way is to give each user their own account. This adds up to a lot of administration, with adding/disabling/enabling user accounts. To avoid this, supervisors have started to use shared "general" accounts like domain\callcenter2, and I don't like the idea of everyone knowing and sharing accounts and passwords. Our ideal solution would be to create a group of computers which require no login by the user, i.e. the users just have to start the computer. Should I configure auto-logon with a single user account like domain\agentAccount? Is there anything else to consider if I use the same account for all users? How do I configure the actual auto-logon with a GPO on the group? Is there a "Microsoft way" without 3rd-party plugins? Or is there a better solution?
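
    On the "Microsoft way" question: auto-logon is driven by well-known Winlogon registry values, which can be pushed to a computer group via Group Policy Preferences (Registry items) rather than a 3rd-party tool. A sketch with placeholder account details:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "AutoAdminLogon"="1"
        "DefaultDomainName"="DOMAIN"
        "DefaultUserName"="agentAccount"
        "DefaultPassword"="PlaceholderPassword"

    The main caveat: DefaultPassword sits in the registry in plain text, readable by anyone with access to the machine. The Sysinternals Autologon tool sets the same behaviour but stores the password as an LSA secret instead, which is worth weighing against the GPO approach.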


  • emacs keybindings

    - by Max
    I read a lot about Vim and Emacs and how they make you much more productive, but I didn't know which one to pick. Finally, when I decided to teach myself Common Lisp, the decision was straightforward: everybody says that there's no better editor for Common Lisp than Emacs + SLIME. So I started with the Emacs tutorial, and immediately I ran into something that seems very unproductive to me. I'm talking about the key bindings for cursor movement: forward/backward: Ctrl+f, Ctrl+b; up/down: Ctrl+p, Ctrl+n. I find these bindings very strange. I assume that fingers should be on their home rows (am I wrong here?), so to move the cursor forward or backward I should use my left index finger, and for up and down my right pinky and right index fingers. When working with any Windows IDE or text editor, to navigate text I usually place my right hand so that my thumb is on the right Ctrl and my index, middle and ring fingers are on the cursor keys. From this position it is very easy and comfortable to move the cursor: I can do one-character moves with my 3 right fingers, or I can press Ctrl with my right thumb and do word-moves instead. Also, I can press Shift with my left pinky and do single-character or word selections. It is also a very comfortable position for reaching the PgUp, PgDn, Home, End, Delete and Backspace keys with my right hand, so I have even more navigation and selection possibilities. I understand that the decision not to use the cursor keys lets one use Emacs over remote terminal sessions where those keys are not supported, but I still find the choice of keys very unfortunate. Why not use j, k, i, l instead? That way I could use my right hand without much finger stretching. So how is Emacs more productive? What am I doing wrong?


  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and which are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, bean collections, whatever) and have a different set of columns. Apart from that they are identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea, but to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    1. I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the task flow.
    2. If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle.

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25" id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25" id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here:

    1. Most importantly, the hard-coded View Object name that formerly populated the iterator's Binds attribute is gone, replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow telling it exactly what VO to instantiate to populate this table!
    2. I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    3. The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from a fixed, known list to a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, but you can see the value of using ADF BC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can spot the subtle change:

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!


  • Dynamic Subdomains

    - by crash
    On my new site I want to have dynamic subdomains. I'm trying to make it so that the subdomains use the same web root as the main domain, all under a single CodeIgniter installation. For example, subdomain.example.com would lead to example.com/subdomain, which is actually example.com/index.php/subdomain. I've already got the DNS and virtual hosts set up, but I'm getting caught up on the .htaccess. The effect of the linked .htaccess is that when navigating to any subdomain, the request gets caught in an infinite loop. (Error log after one request.) It's the same effect for www., which should just resolve to the main domain.
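
    The linked .htaccess isn't visible here, but loops like this usually come from the rewritten request matching the rule again on the next pass. A hedged mod_rewrite sketch that excludes already-routed requests and www (example.com is a placeholder for the real domain):

        RewriteEngine On
        # don't rewrite requests that have already been routed into index.php
        RewriteCond %{REQUEST_URI} !^/index\.php
        # skip www and the bare domain entirely
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        # capture the subdomain (last so %1 refers to this condition's group)
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        RewriteRule ^(.*)$ /index.php/%1/$1 [L]

    The REQUEST_URI guard is the loop-breaker: once a request has been rewritten into /index.php/..., the rule no longer fires on it.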


  • Isopropyl okay to use on MacBook screen?

    - by Archagon
    I've always used a mixture of isopropyl alcohol and distilled water to clean my computer screens (half distilled water, half 70% isopropyl, i.e. roughly 35% alcohol overall). From what I understand, these are exactly the same ingredients used in most commercial screen cleaners, perhaps even more diluted. I recently used this solution to wipe off my 2010 MacBook Pro screen, and there don't seem to be any problems, but this support page explicitly says not to use isopropyl. Now I'm worried that I might have inadvertently damaged something. I'm also concerned because I once managed to dissolve the surface rubber lining of one of my mice with the isopropyl solution, and the MacBook Pro display has a thin rubber bezel keeping the glass in place. Why would Apple single out isopropyl on their support page? Should I be concerned?


  • NMap 6.01

    - by TATWORTH
    NMap 6.01 has been released at http://nmap.org/download.html: "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)." Home page is at http://nmap.org/. Nmap is free to download and use. You can download the source and compile it yourself if you so require.
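
    As a quick taste of the tool, the classic example from the Nmap documentation scans the project's own designated test host (scanme.nmap.org, which the project invites you to scan):

        # -A enables OS/version detection and default scripts, -T4 speeds up timing
        nmap -A -T4 scanme.nmap.org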


  • Object construction design

    - by James
    I recently started to use C# to interface with a database, and there was one part of the process that appeared odd to me. When creating a SqlCommand, the method I was led to took the form: SqlCommand myCommand = new SqlCommand("Command String", myConnection); Coming from a Java background, I was expecting something more similar to: SqlCommand myCommand = myConnection.createCommand("Command String"); In terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. the fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
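
    Worth noting: ADO.NET actually ships the factory style too. DbConnection exposes a parameterless CreateCommand(), so the Java-flavoured version is available, just without the command text argument. A small sketch (the connection string and query are placeholders):

        using System;
        using System.Data.SqlClient;

        class Demo
        {
            static void Main()
            {
                // placeholder connection string; any SQL Server instance will do
                using (var connection = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                {
                    connection.Open();
                    // the connection acts as a factory for its own commands (cf. JDBC)
                    using (var command = connection.CreateCommand())
                    {
                        command.CommandText = "SELECT COUNT(*) FROM Employees";
                        Console.WriteLine(command.ExecuteScalar());
                    }
                }
            }
        }

    So the framework supports both shapes; the design question is really which one reads better at the call site.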


  • Slow http traffic between VMWare guest and host.

    - by toluju
    I have a web application running as an http server inside the VMWare guest OS, and I'm trying to access the content from the host OS. The guest is running Ubuntu, and the host is running Windows XP. The problem is, when I try to access the application from a browser in the host OS, the content takes a very long time to load (up to a minute for a single page). A browser in the guest OS can access the application with no problems. I've tried using both NAT and bridged networking, but the results are the same. The Windows firewall is turned off. The connection itself appears fine, as ping requests from guest to host as well as host to guest complete without errors or delays. Both guest and host can access the external Internet connection without a problem. I'm using VMWare Player. Any ideas?


  • How to split audio into multiple channels from optical S/PDIF or 1/8"?

    - by Josh M.
    I have a motherboard with an optical S/PDIF or 1/8" output. I'd like to "split" that signal into the appropriate channels so that I can then connect it to the wires behind my car's head unit which, in turn, run to the amp. The factory Bose amp just takes a single connector with a million wires running out of it, so that's why I would need to separate the signal into separate channels. On the other end there are four RCA connectors: front left, front right, rear left, rear right. The subwoofer signal does not require an additional connection. Edit: Revised to include S/PDIF or 1/8".


  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure, in the DB especially, but also in the application, a trinary state? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject each item. I can think of a few ways to represent this:

    1. Two boolean flags: Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four states, including one I don't care about (a record which is seen but not approved or rejected is essentially unseen).

    2. String column in the DB with constants/enum in the application. Using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB, and I'm concerned about doing string comparisons whenever I need these records. Perhaps mitigatable with an index?

    3. Single boolean column, but allow nulls. A true is approved, a false is rejected, a null is unseen. Not sure of the pros/cons of this solution.

    What rules should I use to guide my choice? I'm already thinking in terms of DB size and the cost of finding records based on state, as well as the readability of code that ends up using this structure.
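
    For comparison, a single constrained status column is a common fourth option: it keeps all three states explicit without overloading a nullable boolean, and it indexes cheaply. A sketch in PostgreSQL-flavoured SQL (table and column names are made up):

        -- one column, exactly three legal states
        CREATE TYPE feedback_status AS ENUM ('unseen', 'approved', 'rejected');

        CREATE TABLE feedback (
            id     serial PRIMARY KEY,
            body   text NOT NULL,
            status feedback_status NOT NULL DEFAULT 'unseen'
        );

        -- reviewers fetch the moderation queue with an indexable predicate
        CREATE INDEX feedback_status_idx ON feedback (status);
        -- SELECT * FROM feedback WHERE status = 'unseen';

    This avoids both the impossible fourth state of option 1 and the "what does NULL mean?" question of option 3, at the cost of a type definition.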


  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...

    2. Managing CNAMEs on the registrar side is easy, but is there a way to quickly associate CNAMEs with new websites, maybe gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.

    I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
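
    One hedged possibility on the hosting side: Apache's mod_vhost_alias can map the Host header straight onto a per-domain directory, so deploying site number 701 is just creating a folder and pointing DNS at the box, with no per-site admin step. A sketch (paths are placeholders):

        # one catch-all vhost serves every parked domain
        <VirtualHost *:80>
            ServerAlias *
            # %0 expands to the full Host header, e.g. /var/www/sites/example.com
            VirtualDocumentRoot /var/www/sites/%0
        </VirtualHost>

    Note that www.example.com and example.com map to different folders under this scheme, so you'd either create both directories per site or normalize the host with a rewrite first.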


  • easiest way to (download and) convert HTML-structures into EPUB?

    - by gojira
    How would one (download and) convert HTML structures into EPUB (or any other format suitable for the Sony PRS-505 reader)? My question is not how to convert a single HTML file into an EPUB file, as this is easy. What I mean is: I have some books I want to read on my Sony PRS-505, and these books are most often online in HTML format, but with many interlinked pages, and there is one page with the list of contents, like this example: http://www.edge.org/documents/ThirdCulture/d-Contents.html ... or sometimes it's a little bit more complicated, as the list of contents only lists the chapters, and inside the chapters there are links to sub-chapters, like in this example: http:SLASHSLASHwww.hyw.com/Books/WargamesHandbook/Contents.htm (I can only post 1 hyperlink now b/c of user restriction, so this is why there is SLASHSLASH instead of //). I want to convert these examples and several others, with correct chapters, images and some acceptable formatting etc., so basically I want to make a proper ebook out of the HTML tree. What is the easiest way?
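
    One commonly suggested route (my assumption, not the asker's): Calibre's command-line converter can take the contents page as its input and follow its links a set number of levels deep, producing an EPUB the PRS-505 accepts. A sketch using the first example above:

        # grab the book locally first (the converter follows links in local files)
        wget --mirror --convert-links --no-parent http://www.edge.org/documents/ThirdCulture/
        # then convert, recursing 2 link levels down from the contents page
        ebook-convert d-Contents.html book.epub --max-levels 2 --breadth-first

    The --max-levels depth is what handles the chapter/sub-chapter case in the second example.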


  • Google Chrome and kerberos authentication against Apache

    - by Lars
    I've managed to get Kerberos authentication working with Apache and Likewise Open, but so far Google Chrome doesn't seem to play fair. Unless I start it with chrome.exe --auth-server-whitelist="*company.com", it only pops up a login window and will not accept any credentials at all. As far as I know, the --auth-server-whitelist option should only be needed when trying to get single sign-on (SSO) to work; if you are fine with a log-in window, it should work directly out of the box, but so far it doesn't. This is the error I get in the Apache logs: [Tue Dec 13 08:49:04 2011] [error] [client 192.168.1.15] failed to verify krb5 credentials: Unknown code krb5 7
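
    If the whitelist flag turns out to be the fix, it doesn't have to live on the command line: Chrome reads the same setting from the AuthServerWhitelist policy, which can be pushed via the registry (or a GPO with Chrome's policy templates). A sketch, reusing the pattern from the question:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
        "AuthServerWhitelist"="*company.com"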


  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class? I see only one". The whole class was a manager (or a service, if you prefer to call it that) and almost all the methods were simply delegating stuff to a DAO, similar to: SomeClass getSomething(parameters) { return myDao.findSomethingBySomething(parameters); } A kind of boilerplate with no logic (or at least I do not consider such simple delegation logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what this method does (I do not know how it could be more obvious), and that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them), "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute to write, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational, not "flammable", reasons why one should test every single (or as many as possible) lines of code?
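
    For concreteness, the test under debate would look something like the following. This is a hypothetical sketch in C# with Moq and NUnit (the original discussion is language-agnostic and all names here are invented), and it shows how little such a test asserts beyond "the delegation happened":

        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class SomeManagerTests
        {
            [Test]
            public void GetSomething_DelegatesToDao()
            {
                var expected = new SomeClass();
                var dao = new Mock<IMyDao>();
                dao.Setup(d => d.FindSomethingBySomething("params")).Returns(expected);

                var manager = new SomeManager(dao.Object);

                // the only observable behaviour is the pass-through itself
                Assert.AreSame(expected, manager.GetSomething("params"));
                dao.Verify(d => d.FindSomethingBySomething("params"), Times.Once());
            }
        }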


  • Two Upcoming Server Virtualization Webcasts

    - by Chris Kawalek
    We have a couple of interesting server virtualization webcasts coming up that you might be interested in. Have a look:  Webcast 1: October 23rd, 9 am PST Virtualized Infrastructure Simplified with Oracle VM and NetApp Storage and Data Management Solutions Point-and-Click Interface Deploys Virtualized Data Infrastructure in Minutes  Provisioning and deploying a virtual data infrastructure can be costly, time-consuming, and prone to error. Oracle VM and NetApp joint solutions, however, give you a single point-and-click interface to deploy your virtualized data infrastructure seamlessly in minutes. Join us in this live webcast to learn more from product experts and view a product demo. Register (for free!) here.  Webcast 2: November 7th, 9 am PST Report Shows Oracle VM Up to 10x Faster than VMware vSphere 5 in Time to Deployment Time is your IT department’s greatest commodity. So when a new report reveals that your IT staff can deploy Oracle Real Application Clusters (Oracle RAC) up to 10 times faster than a traditional install performed with VMware vSphere 5, it’s newsworthy. Join us in this live webcast to learn how you can realize your time savings. Register (for free!) here. 


  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 sec, but mine takes more than 60 sec every time it boots. I also read in some forums that I need to post the dmesg and bootchart output to identify which process is slowing down the boot, and as I'm no expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how. My laptop specs: Model: ASUS K45VS, RAM: 8GB, CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8, Graphics Card: nVidia GeForce GT 645M, HDD: 750GB, OS: single-boot Ubuntu 12.04 LTS. System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64. System.release: Ubuntu 12.04.4 LTS. System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (note the huge time gap in the middle):

    [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
    [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
    [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
    [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
    [ 802.034422] audit_printk_skb: 69 callbacks suppressed
    [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png: Looking forward to learning how to improve both my boot time and my knowledge. Thanks in advance :)


  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image:
    1. After receiving the image file from the user, rename the file to <image_id>_<username>
    2. Move the image to /images/userprofile
    3. Add the image filename to a table users containing profile details like first_name, last_name, age, gender, birthday

    For an image attached to a review done by a user:
    1. After receiving the image file from the user, rename the file to <image_id>_<review_id>
    2. Move the image to /images/reviews
    3. Add the image filename to a table reviews containing details like review_id, review_content, user_id, score

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?

    Question 2: Or have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?

    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
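
    On Question 2, the linking table is the conventional normalization, and a JOIN on an indexed foreign key is rarely the bottleneck. A sketch of that table in PostgreSQL-flavoured SQL, assuming the reviews table described in the question:

        -- one row per uploaded photo, many rows per review
        CREATE TABLE review_images (
            image_id       serial PRIMARY KEY,
            review_id      integer NOT NULL REFERENCES reviews (review_id),
            image_filename varchar(255) NOT NULL
        );

        -- index the foreign key so fetching a review's photos is a cheap lookup
        CREATE INDEX review_images_review_id_idx ON review_images (review_id);
        -- SELECT image_filename FROM review_images WHERE review_id = 42;

    This also answers Question 1 without serializing: multiple photos per review are just multiple rows.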

