Search Results

Search found 28957 results on 1159 pages for 'single instance'.


  • Convert RAID-0 to RAID-1 on HP ML350G6 with P410i zero memory

    - by JLe
    I have an HP ML350 G6 with a P410i zero-memory RAID controller. As far as I can tell, that means I can't expand a current single-drive "RAID-0" configuration to a RAID-1 using the HP Offline ACU without installing memory and BBWC. Is that correct? What makes me wonder is that expanding a RAID-0 to a RAID-1 should be pretty similar to replacing a failed drive in an existing RAID-1 - so why can't I expand without memory and BBWC? Otherwise, is my best option to (i) use Ghost to capture the disk, then create a new RAID-1 with the existing drive and a new one, or (ii) buy memory+BBWC and do it online? Thanks

  • Double audio CD ripping weirdness

    - by jqno
    Since I installed Ubuntu 12.04, Rhythmbox, Banshee and Sound Juicer have started acting weird around double CDs, specifically CD #2 of a double CD. Sometimes they will show the information of CD #1: track names, durations, and even the track count are incorrect. Sometimes they will first show the tracks for CD #1, then continue onto CD #2 if CD #2 has more tracks than #1. Sound Juicer seems unable to find any track durations at all, even for single CDs. Obviously, this is a pain when I'm trying to rip double CDs, and I have a fair number of them that I want to rip. This happens on both my machines (a slightly aging iMac and a 1-year-old Sony Vaio); on previous versions of Ubuntu, it never happened, on the same machines. So I suspect 12.04 is using a different library for extracting audio CD data. Just for kicks, I tried with Linux Mint 13, and there it works correctly, even though it claims to be based on Ubuntu 12.04 and therefore should be using (partially) the same software. So if the Mint guys can fix it, I should be able to do it too, right? So, my question: what changed in 12.04 that could cause this? And more importantly: what can I do to fix it?

  • Web-based (intranet / non-hosted) timesheet / project tracking tools

    - by warren
    I realize some similar questions have been asked along these lines before, but from reading through them today, it appears they don't match my use case. I am looking for a web-based, non-hosted time and project tracking tool. I've downloaded Collabtive so far, but am looking for other suggestions too. My list of requirements:

      - runs on a standard LAMP stack
      - non-hosted (i.e., there is an option to download and run it on a local server)
      - not a desktop/single-user application
      - easy to use - my audience is a mix of technical and non-technical folks
      - easy to maintain - when time for upgrading comes, I'd really like to not have to rebuild the app (a la ./configure ; make ; make install)
      - needs to support multiple users
      - free-form project additions: we don't have a central project management authority (users should be able to add whatever they're working on, not merely pick from a drop-down)

    Does anyone here have experience with such tools? It doesn't have to be free... but free is always nice. :)

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. A lot of the time I hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would include provisioning hardware, installing an OS, configuring the network on the machine, and - once the network is available - using a tool of your choice such as CFEngine, Puppet or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that can get a server of a particular class up and running in Amazon EC2. If I have a particular stack running on my server, such as erlang, tomcat6 etc., what's the fastest way to get these up and running and hooked into Amazon's load balancer - from network, to software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?
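
    If the AMI is pre-baked with the stack, the remaining work is one API call to launch and one to register with the load balancer. A minimal sketch of that flow in Python with boto3 (a present-day stand-in for the era's EC2 tooling; the AMI ID, instance type, balancer name and bootstrap script are all hypothetical placeholders):

        import boto3

        # Assumption: an AMI baked with the erlang/tomcat6 stack, a classic ELB
        # named "web-cluster", and a short user-data script for last-mile config.
        USER_DATA = """#!/bin/bash
        puppet agent --onetime --no-daemonize   # pull node-specific config
        """

        ec2 = boto3.resource("ec2")
        elb = boto3.client("elb")

        # Launch from the pre-configured AMI; user-data runs on first boot.
        instance = ec2.create_instances(
            ImageId="ami-12345678",        # hypothetical AMI ID
            InstanceType="m1.large",
            MinCount=1, MaxCount=1,
            UserData=USER_DATA,
        )[0]
        instance.wait_until_running()

        # Hook the new node into the existing cluster's load balancer.
        elb.register_instances_with_load_balancer(
            LoadBalancerName="web-cluster",
            Instances=[{"InstanceId": instance.id}],
        )

    The usual split: anything slow and common to the class goes into the AMI; anything node- or role-specific happens in user-data, often just enough to hand control to Puppet or Chef.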

  • Is there any site which tells or highlights developer income by zone? I think I am getting less yearly [closed]

    - by YumYumYum
    http://www.itjobswatch.co.uk/ is the only good site I have found, but it covers nothing for Belgium or other European countries. I was searching for a site that gives income details for European developers (especially for Belgium), but hardly a single website exists that tells the truth. My situation: as a programmer, in one company I use all my knowledge 20+ hours a day (office, home, office, home), because every feature request is a challenge - complex, stressful, with overtime - and yet at the end of the year in Europe my total was not even the lowest amount shown on http://www.itjobswatch.co.uk/ (UK pound vs. euro, despite the high-tech tools they mostly use and the development speed expected). For a whole year I had to perform at my best with C, Java, PHP (Zend Framework, CakePHP), Bash, MySQL/DB2, Linux/Unix, JavaScript, Dojo, jQuery, CSS, HTML, XML. The company wants to pay less, yet everything always has to be perfect - solid gold and diamond quality. And the total amount is not sufficient for me if I compare it with other countries and the cost of living in Europe, including taxes. Is there any developer/community site where we can see, by country or zone, the minimum to maximum income being offered to developers? So that developers who are in trouble can shout and speak up louder with those references?

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft recently released a very interesting white paper covering a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. The white paper covers the following important concepts about Power View:

      - Understanding the hardware and software requirements and their download locations
      - Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers
      - Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012 and Power View in a workgroup
      - Configuring single sign-on access for double-hop scenarios with and without Kerberos

    You can download the white paper from here. It covers many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment; if yes, please share your experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

  • Teredo - how to connect to host behind NAT?

    - by Signum
    All I want to achieve is to establish a connection to my simple server (written in C# using the TcpListener class, if it makes any difference) on my computer, which is behind NAT. It has an IPv6 address (a public IP, starting with 2001:0) on the Teredo interface. However, I cannot even ping it from outside my network; for instance, pinging this address from the website http://mebsd.com/ipv6-ping-and-traceroute gives 100% packet loss. As I understood from reading about Teredo, there should be no need for port forwarding - so where could the problem be?
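
    One thing worth ruling out first: a failed ICMP ping does not prove TCP is unreachable (many hosts and relays drop ICMPv6 echo), and the reverse problem - the server listening only on IPv4 - is just as common. A quick way to test TCP reachability independently of the C# code is a bare IPv6 listener; a minimal sketch in Python (the port number is arbitrary):

        import socket

        # Bind to the IPv6 wildcard; if a remote client can reach this but not
        # the C# server, the issue is in the TcpListener setup (e.g. bound to
        # IPv4 only), not in Teredo itself.
        srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("::", 9000))      # "::" covers all IPv6 addresses, incl. Teredo
        srv.listen(1)
        print("listening on [::]:9000")
        conn, addr = srv.accept()
        print("connection from", addr)
        conn.close()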

  • Desktop switcher appears to be broken for quick double-switches

    - by Jon Blackburn
    I'm wondering if anyone else has seen this. I have three virtual desktops aligned in a horizontal row, with only a single application window on the middle desktop, and keyboard shortcuts mapped to navigate between the desktops. Obviously, I never use the up/down arrows, because I only have one row of workspaces. Here's the problem, which only started to happen after I installed 12.04.1: when I rapidly hit the shortcut twice to go from workspace 1 to workspace 3, the window on workspace 2 gets moved to workspace 1. I have checked using both Unity and GNOME 3, and the behavior is the same under both. If I change back to the default workspace setup (a 2x2 grid of desktops) things seem to settle down (i.e., no wandering windows). Not every type of application window behaves the same way: I couldn't get a Chrome browser window to jump from 2 to 1, but both Terminal and Terminator exhibit the behavior. Any thoughts? Better workarounds? Thanks in advance.

  • XKB - remap arrow keys and preserve shift behaviour to select text

    - by dgirardi
    I realize arrow key remapping is an old problem, but I cannot seem to find a good solution that lets me select text with Shift + remapped keys as I would with the vanilla arrow keys. For instance, if I remap Caps Lock to ISO_Level3_Shift and set xkb_symbols to read either key <AC08> { [ k, K, Down, Down ] }; or key <AC08> { type="THREE_LEVEL", [ k, K, Down ] }; then pressing Shift+CapsLock+K behaves exactly like CapsLock+K (while Shift+Down behaves differently from Down alone). I had somewhat more success using higher-level macro utilities and generating keyboard events (i.e. generating both the Shift and the arrow keypresses); however, that approach has a whole set of different problems - often the UI response to a simulated keypress differs from a "real" keypress, and there are performance problems as well: I can type faster than the thing can handle. tl;dr: how can you shift-select using remapped arrow keys under X?

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general - understanding that the answer may vary by manufacturer and model (and driver, and...) - in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/sec of data to a somewhat under-powered workstation. I can get a USB 2 High Speed device or a network-attached device, and am interested in avoiding impacting the CPU where possible. Obviously, on a 100 Mbit NIC that rate is roughly half the theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/sec the USB 2 "High Speed" interface offers. But if the latter requires a lot of CPU support and the former doesn't...
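
    In practice this usually has to be measured rather than looked up, since it depends heavily on the NIC's offload features and the driver. A rough way to compare the two paths empirically is to watch overall CPU while draining data at the expected rate; a sketch using Python with the third-party psutil package (the device path is a hypothetical placeholder for the actual capture device):

        import time
        import psutil  # third-party: pip install psutil

        def cpu_during(read_fn, seconds=10, chunk=1 << 16):
            """Sample system-wide CPU usage while draining a data source."""
            psutil.cpu_percent(interval=None)        # reset the sample window
            deadline = time.time() + seconds
            received = 0
            while time.time() < deadline:
                received += len(read_fn(chunk))
            pct = psutil.cpu_percent(interval=None)  # average since the reset
            print(f"{received / seconds / 1e6:.1f} MB/s at {pct:.0f}% CPU")

        # e.g. drain the capture device over USB, then repeat over the network
        with open("/dev/dvb/adapter0/dvr0", "rb") as usb:   # placeholder path
            cpu_during(usb.read)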

  • How do I start Ubuntu without X server?

    - by Kaare Mikkelsen
    So, I'm trying to install the official NVIDIA drivers for my fancy graphics card, and they advise disabling the X server before installing, as well as making sure I can boot without the X server, so as not to wreck anything. However, I seem to be doing something wrong. As I understand it, this should be as simple as changing the runlevel from 2 to 1? (I am aware that all this may simply be me not understanding runlevels.) If that is correct, a quick test should be simply typing "sudo init 1" or "sudo telinit 1" in a terminal? Doing that makes the system attempt to shut down, only it stops at the purple screen with the Ubuntu logo and 5 white dots underneath. I haven't observed it get anywhere from there; I always end up holding down the power button. "sudo telinit 3" has no visible effect. Alternatively, I should be able to get there using recovery mode, activated through the GRUB menu? I have very little success with that. After picking recovery mode, I am faced with a set of options about how to proceed. With both the "network enabled" and "text only" choices, I get a dialog explaining that this will mount my / file system in read/write mode and asking whether this is what I want. I choose yes, and it seems to report that my drive is fine (there's a single line of text detailing the state of the partition). And then it stops. I haven't tried letting it sit for more than a few minutes, but presumably this process should be comparable in duration to a regular boot? I am not particularly fond of messing with any .conf files until I am certain I can handle things with the training wheels on. So, I guess there are two questions: the one in the title, and "how do I start a text-only session without changing defaults?" Thanks in advance :)

  • Edit files from Cyberduck in an existing Vim window

    - by Eli Gundry
    I use Cyberduck as my go-to FTP client on Windows. I have but one complaint: whenever I click the edit button to edit a remote file with a local copy of gVim, it opens in a new window/instance of Vim. This leads to a cluttered desktop, and it keeps AutoComplPop from working at its full potential. What I would like is for every file to automatically open in a new buffer inside an existing gVim window instead of a new window - much like the Windows version of gVim itself, which has the option to edit a file in a new buffer. Is there any way to do this in the Cyberduck/gVim settings?

  • SSO between multiple Flex applications

    - by KarthiPk
    We have three applications developed in Flex, all using BlazeDS. These applications have their own authentication implementations (database-backed), and they will be deployed in Tomcat. Deploying all of them in the same Tomcat instance is acceptable for us. We want to bring the authentication credentials of all these applications into a single place and also provide an SSO feature between them. We also want the authentication module to be configurable - for example, the system administrator should be able to decide whether authentication is done against a database or LDAP. Say a user successfully logs into app1; when he accesses app2 in the same browser, he should be automatically logged in. The same goes for logout. We have been exploring OpenAM, jGuard and JOSSO, but I'm not sure how much customization these require to work with Flex. I would like to know how people are implementing SSO for Flex applications. Is there a common, simple SSO solution available for Flex-based applications?

  • Set 802.1Q tagged port on VLAN1 on Dell PowerConnect switch

    - by Javier
    I'm having big trouble adding this Dell switch to my network. Here we use several VLANs to segment traffic. All switches (3Com and D-Link mostly) are configured with the same VLANs; most ports are 'untagged' and belong to a single VLAN, except the ports used to join the switches together (in a star topology) - these belong to all VLANs and use 802.1Q tags. So far, it works really well. But on this new switch (a Dell PowerConnect 5448), the settings are very different (and confusing). I have configured the same VLANs, and the uplink ports are set to 'general' mode (supposed to be fully 802.1Q compliant). I can set the VLAN membership to 'T' on these ports for all VLANs except VLAN 1 - it always stays 'U' on VLAN 1. Any ideas?

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count outgoing (relayed) messages on a per-user basis in Postfix. I'm managing a small commercial SMTP relay and decided it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually enforce some limits), possibly in real time. I've looked almost everywhere and started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter or whatever)?
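
    Short of a milter, the mail log already has what a daily report needs: with SASL authentication enabled, Postfix records the login on the smtpd line of each queued message, so sasl_username lines can be joined to status=sent lines by queue ID. A minimal sketch in Python (assumes a stock syslog format at /var/log/mail.log; the exact field layout varies between setups):

        import re
        from collections import Counter

        # smtpd line: "... postfix/smtpd[pid]: QUEUEID: client=..., sasl_username=user"
        SASL = re.compile(r": ([0-9A-F]+): client=.*sasl_username=(\S+)")

        queue_owner = {}   # queue ID -> authenticated user
        sent = Counter()

        with open("/var/log/mail.log") as log:
            for line in log:
                m = SASL.search(line)
                if m:
                    queue_owner[m.group(1)] = m.group(2)
                elif "status=sent" in line:
                    # delivery line: "... postfix/smtp[pid]: QUEUEID: to=<...>, ..."
                    qid = line.split(": ", 2)[1]
                    if qid in queue_owner:
                        sent[queue_owner[qid]] += 1

        for user, count in sent.most_common():
            print(f"{user}\t{count}")

    Run from cron for the daily report; true real-time enforcement would still point back at a policy service or milter.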

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework. In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring's Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application. In his new article, Heffelfinger tweaks the application to make it more user-friendly.

    From the article: "The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys - meaning that they have no business value and are used strictly as a unique identifier - so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly."

    He concludes the article with a summary: "The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn't actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard." Have a look at the complete article here.

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions:

      - Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at?
      - Does glVertexAttribDivisor mean you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
      - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
      - Does going to the effort of reusing the same vertices and indices mean you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
      - Is mipmapping the displacement map going to work, on current or future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
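
    The "transposable grid" half is at least easy to prototype on the CPU: build one vertex grid, and per tile apply only an offset plus a heightmap lookup - exactly what the vertex shader would do with the offset as a uniform. A toy sketch in Python/NumPy (the resolutions and random heightmap are placeholders):

        import numpy as np

        TILE_RES = 16                                  # quads per tile edge
        u = np.linspace(0.0, 1.0, TILE_RES + 1)
        GRID = np.stack(np.meshgrid(u, u), axis=-1).reshape(-1, 2)  # built once

        HMAP = np.random.rand(257, 257).astype(np.float32)  # toy displacement map

        def tile_verts(tile_x, tile_y):
            """Every tile reuses GRID; only the offset (the would-be uniform)
            and the displacement lookup differ per tile."""
            xy = GRID + (tile_x, tile_y)            # apply the offset "uniform"
            t = (xy * TILE_RES).astype(int)         # texel coords into HMAP
            z = HMAP[t[:, 1] % 257, t[:, 0] % 257]  # displacement fetch
            return np.column_stack([xy, z])         # (x, y, height) per vertex

        # adjacent tiles sample identical heights along their shared edge,
        # so they stitch seamlessly despite sharing one vertex/index array
        a, b = tile_verts(0, 0), tile_verts(1, 0)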

  • bind tmux prefix to OS X cmd key (or any other binding)

    - by rubenfonseca
    Hi all. I'm used to iTerm2 (or Terminal.app, for that matter) on OS X, but I want to move to tmux (or screen - the problem is similar for both). The idea is to have a single iTerm tab with a tmux session opened with multiple tabs. To make the transition, there is one basic feature I need to configure in tmux: switching to tab 'n' with cmd+n (like Firefox, Chrome, iTerm2 itself, etc.). However, I can't find a way of mapping the cmd key on the Mac keyboard. I first tried to make cmd a prefix key, with no success. I've tried setting set-option -g prefix M-a (hoping for Meta-a) and set-option -g prefix ^a (hoping for ^ to work), but nothing works. Is this possible? I don't really need to bind the prefix to cmd, but I want to be able to change tmux tabs with cmd+n. Thank you

  • How do I make compiling code not bring my system to its knees?

    - by Jason Baker
    I have a MacBook with Snow Leopard and 2 GB of RAM. When I compile C or C++ code, my system becomes all but unusable. For instance, when I compile LLVM I notice there are about 10 or 11 processes (cc1plus) launched at a time that suck up my CPU time and memory. Is there any way to make it run fewer compile jobs at a time? I'll gladly wait longer to have a usable system while compiling. Or is this something you just have to live with when compiling C or C++?

  • MySQL database synchronizing between local and remote with C#

    - by Neo
    I've posted this here as it's more of a MySQL question than C#. I have written some software that runs a local instance of MySQL when it first starts. Once MySQL is up, I would like to synchronize the data between a remote database table and the local database table that the software uses (it shouldn't sync any other databases/tables, as there are a lot). I have replication set up to synchronize the entire database to another server, which works - except that when the server goes down, it never comes back up. Based on that, I don't think replication will work here, since when the software is closed it also shuts down MySQL. So what would be the best method of synchronizing the remote and local databases?
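
    If replication is too brittle for an instance that starts and stops with the application, a pull-based sync keyed on a last-modified timestamp is a common fallback. A rough sketch of the idea in Python with mysql-connector (the table and column names are hypothetical; it assumes the table has a primary key id and an updated_at column maintained on the remote side):

        import mysql.connector  # third-party: pip install mysql-connector-python

        def sync_table(remote_cfg, local_cfg, table="app_data"):
            """Copy rows changed on the remote since the local high-water mark."""
            remote = mysql.connector.connect(**remote_cfg)
            local = mysql.connector.connect(**local_cfg)
            rc, lc = remote.cursor(), local.cursor()

            # high-water mark: the newest change already present locally
            lc.execute(f"SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM {table}")
            mark = lc.fetchone()[0]

            rc.execute(f"SELECT id, data, updated_at FROM {table} "
                       f"WHERE updated_at > %s", (mark,))
            for row in rc.fetchall():
                # REPLACE relies on the primary key to upsert
                lc.execute(f"REPLACE INTO {table} (id, data, updated_at) "
                           f"VALUES (%s, %s, %s)", row)
            local.commit()

    This only pulls remote changes down; pushing local edits back needs the mirror-image pass plus a conflict rule (e.g. newest timestamp wins).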

  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where the user wants to persist a master-child relationship. Here we have two tables, dept and emp (using the Scott schema), which have a master-child relation.

    Model diagram: Dept is the master table and Emp is the child table, and Dept is related to Emp by a one-to-n relationship. Let's assume we need to make new entries in the emp table using the EJB persist method. Create an Emp form by manually dropping the fields, where deptno is dropped as Single Selection -> ADF Select One Choice (it is a foreign key in the emp table) from the deptFindAll data control. Make sure to bind all field variables in the backing bean.

    Employee form: once the Emp form is created, if the persistEmp() method is used to commit the record, it will persist all the Emp fields into the emp table except deptno, because deptno is passed as an object reference in the persistEmp method (it is a foreign-key reference). So deptno can't be passed to the persistEmp method directly; instead, it should be set explicitly on the Emp object - then persist will save the deptno to the emp table.

    The solution below is one way to work around this scenario: create a method in the session bean for adding emp records and expose this method in the data control. For example, in the code below, em is an EntityManager (a member variable in sessionEJBBean):

        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // set the deptno explicitly
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // pass the dept object
            emp.setDept(dept);
            // persist the emp object to the Emp table
            em.persist(emp);
        }

    From the Data Control palette, drop addEmpRecord as an ADF method button. In the Edit Action Binding window, enter the parameter values that are bound in the backing bean. For example, if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as #{backingbean.deptno.value}.

  • How do I replicate AutoHotkey hotstrings in Chrome OS

    - by IThink
    Google gave me a Cr-48 about a month ago. I like it. It's simple and powerful; the temptation to fuss and muss has been removed. However, one of the things I miss a lot is the hotstrings I set up in AutoHotkey, so that, for instance, typing "asap" auto-expands to "as soon as possible" no matter what software I am in. I cannot do that in Chrome OS. Google Docs has something under Tools > Preferences > Automatic substitution, but that is specific to Google Docs. I want to have hotstrings everywhere.

  • PostgreSQL on Amazon EBS volume: realistic performance, or move to something more lightweight?

    - by Peck
    I'm working on a little research project, currently running as an instance on EC2, and I'm hoping to figure out whether I'm going down the right path. We, like a thousand other people, are making use of some of Twitter's streaming feeds to gather some data to have fun with, and my DB seems to be having problems keeping up; queries take what seems to be a very long time. I'm not a DBA by trade, so I'll just dump some info here and add more if need be. System specs:

      - EC2 XL, 15 GB of RAM
      - EBS: four 100 GB volumes, RAID 0

    The stream we're getting works out to around 10k inserts per minute, across 3 main tables, with the users we're tracking currently somewhere in the neighborhood of 26M rows. Is this volume of inserts on this hardware too much to ask out of EBS? Should I take a look at something with less overhead, like MongoDB?
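
    At ~10k inserts/minute, how the rows arrive matters as much as the disks: row-at-a-time INSERTs pay a round trip and a commit each, while staging batches through COPY typically buys an order of magnitude before EBS becomes the limit. A sketch of the technique with psycopg2 (the table and column names are placeholders):

        import io
        import psycopg2  # third-party: pip install psycopg2

        conn = psycopg2.connect("dbname=tweets")
        cur = conn.cursor()

        def bulk_insert(rows):
            """Load a batch of (user_id, text) tuples via COPY: one round
            trip and one commit instead of one per tweet."""
            buf = io.StringIO()
            for user_id, text in rows:
                # naive escaping; real data needs tabs, newlines and
                # backslashes handled properly
                text = text.replace("\t", " ").replace("\n", " ")
                buf.write(f"{user_id}\t{text}\n")
            buf.seek(0)
            cur.copy_from(buf, "tweets", columns=("user_id", "text"))
            conn.commit()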

  • Materialized View vs POJO View based on Objects representing Oracle tables

    - by Zack Macomber
    I have about 12 Oracle tables that represent data being integrated from an external system into my web application. This data will be used in an informational and comparative manner for the clients using the application. On one particular page, I need to combine data from 3-5 Oracle tables for display as an HTML table. We are NOT currently using a framework (Apache Struts, for instance) and we're not in a position to move this Java web application into one at the moment (I'm trying to get us there...). I am certainly not an architect, but I see two ways I could effectively build this page (I know there are others, but these seem like good candidates):

      1. Create an Oracle materialized view that represents what the HTML table should look like, then create a POJO based on the view that I can integrate into my JSP.
      2. Create POJOs that represent the Oracle tables themselves, then create another POJO that is the view used for the HTML table, and integrate that POJO into my JSP.

    To me, it seems the materialized view might offer quicker results, which is always what we strive for in web applications. But if I just create 12 POJOs that represent the Oracle tables and build POJO views off of those, I have the advantage of keeping all the code in one place, with the possibility of creating any number of different views and reusable components in my web application. Any thoughts on which might be the better route? Or maybe you know of an even better one?

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and - very important - what the security requirements are. From the answers to these questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead - it needs a certain amount of memory for locking and so on - but it has a very clean boundary: everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging too.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier - I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement; at the end, the highest number wins. And many times it's a mix - perhaps this person could segment customers into larger regions or districts or products in a database, and within that database have multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
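
    The weighted matrix is easy to mechanize, too; a toy sketch in Python (the requirements, weights and scores are made up for illustration):

        # each choice gets a 1-5 score per requirement; weights say how much
        # each requirement matters; the highest weighted total wins
        weights = {"security isolation": 3, "admin overhead": 2, "cross-customer queries": 2}

        choices = {
            "database per customer": {"security isolation": 5, "admin overhead": 1, "cross-customer queries": 1},
            "schemas in one database": {"security isolation": 3, "admin overhead": 3, "cross-customer queries": 4},
            "regional databases + schemas": {"security isolation": 4, "admin overhead": 4, "cross-customer queries": 3},
        }

        for name, scores in choices.items():
            total = sum(weights[r] * scores[r] for r in weights)
            print(f"{name}: {total}")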
