Search Results

Search found 39588 results on 1584 pages for 'two spirit'.


  • Installing linux on OCZ RevoDrive3 x2

    - by user2101712
    First of all, here is the configuration of my computer: Motherboard: Asus H87Plus; RAM: Corsair Vengeance 32GB; Processor: Intel i7 4770; Drive: OCZ RevoDrive 3 x2 (240 GB) (the RevoDrive 3 is a PCIe module). I am trying to install the latest version of Ubuntu Desktop (13.10). The problem is that in the UEFI (BIOS) the drive shows up as a 240 GB drive, but in the Ubuntu installer it shows up as two 120 GB drives. If I install Ubuntu on either of these two drives, it never boots. The screen flickers a few times and comes back to the UEFI menu. I have tried reading up and have come across information that the drive has a "fakeraid", and that the solution is to use dmraid. However, when I give the following commands in the terminal (from the live CD):

        # modprobe dm_mod
        # dmraid -ay

    it says "no raid disks", and the following command:

        # ls -la /dev/mapper/

    just shows /dev/mapper/control. How can I install Ubuntu on my computer? What is the correct method?

    Read the article

  • What diagrams, other than the class diagram and the workflow diagram, are useful for explaining how an application works?

    - by Goran_Mandic
    I am working on a small Delphi project composed of two units. One unit is for the GUI, and the other is for data management, file parsing, list iterating and so on. I've already made a class diagram, and my workflow diagram looks like hell - it's too complex for anyone to read. I've considered making a dataflow diagram, but it would be even more complex. A use case diagram wouldn't be of use either. Am I missing some diagram type which could somehow represent the relationship between my two units?

    Read the article

  • Is there a correlation between complexity and reachability?

    - by Saladin Akara
    I've been studying cyclomatic complexity (McCabe) and reachability of software at uni recently. Today my lecturer said that there's no correlation between the two metrics, but is this really the case? I'd think there would definitely be some correlation, as less complex programs (from the scant few we've looked at) seem to have 'better' results in terms of reachability. Does anyone know of any attempt to look at the two metrics together, and if not, what would be a good place to find data on both complexity and reachability for a large(ish) number of programs? (As clarification, this isn't a homework question. Also, if I've put this in the wrong place, let me know.)
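
    For reference (not part of the original question), McCabe's cyclomatic complexity of a single routine is usually computed from its control-flow graph as M = E - N + 2P. A tiny illustrative sketch of that arithmetic, using a made-up example graph:

        using System;

        class CyclomaticComplexityExample
        {
            // McCabe's formula: M = E - N + 2P, where
            //   E = edges in the control-flow graph, N = nodes, P = connected components (1 for a single routine).
            static int Complexity(int edges, int nodes, int components = 1)
            {
                return edges - nodes + 2 * components;
            }

            static void Main()
            {
                // A hypothetical routine with two binary decision points: 7 nodes, 8 edges, one component.
                // M = 8 - 7 + 2 = 3, i.e. three linearly independent paths through the code.
                Console.WriteLine(Complexity(edges: 8, nodes: 7)); // prints 3
            }
        }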

    Read the article

  • Public versus private inheritance when some of the parent's methods need to be exposed?

    - by Vorac
    Public inheritance means that all fields from the base class retain their declared visibility, while private inheritance means they are forced to 'private' within the derived class's scope. What should be done if some of the parent's members (say, methods) need to be publicly exposed? I can think of two solutions. Public inheritance somewhat breaks encapsulation; furthermore, when you need to find out where the method foo() is defined, you have to look up a chain of base classes. Private inheritance solves these problems, but introduces the burden of writing wrappers (more text), which might be a good thing as far as verbosity goes, but makes interface changes incredibly cumbersome. What considerations am I missing? What constraints on the type of project are important? How do I choose between the two (I am not even mentioning 'protected')? Note that I am targeting non-virtual methods. There isn't such a discussion for virtual methods (or is there?).

    Read the article

  • How to perform Cross Join with Linq

    - by berthin
    A cross join performs a Cartesian product of two sets or sequences. The following example shows a simple Cartesian product of the sets A and B:

        A (a1, a2)
        B (b1, b2)
        => C (a1 b1, a1 b2, a2 b1, a2 b2) is the Cartesian product's result.

    LINQ to SQL supports cross join operations. A cross join is not an equijoin, meaning there is no equality predicate in the join clause of the query. To define a cross join query, you can use multiple from clauses. Note that there is no explicit operator for the cross join. In the following example, the query must join a sequence of Product with a sequence of PricingRule:

        //Fill the data source
        var products = new List<Product>
        {
            new Product {ProductID = "P01", ProductName = "Amaryl"},
            new Product {ProductID = "P02", ProductName = "acetaminophen"}
        };

        var pricingRules = new List<PricingRule>
        {
            new PricingRule {RuleID = "R_1", RuleType = "Free goods"},
            new PricingRule {RuleID = "R_2", RuleType = "Discount"},
            new PricingRule {RuleID = "R_3", RuleType = "Discount"}
        };

        //cross join query
        var crossJoin = from p in products
                        from r in pricingRules
                        select new { ProductID = p.ProductID, RuleID = r.RuleID };

    Below are the definitions of the two entities used in the above example:

        public class Product
        {
            public string ProductID { get; set; }
            public string ProductName { get; set; }
        }

        public class PricingRule
        {
            public string RuleID { get; set; }
            public string RuleType { get; set; }
        }

    Iterating over the result:

        foreach (var result in crossJoin)
        {
            Console.WriteLine("({0} , {1})", result.ProductID, result.RuleID);
        }

    The output should be similar to this:

        (P01 , R_1)
        (P01 , R_2)
        (P01 , R_3)
        (P02 , R_1)
        (P02 , R_2)
        (P02 , R_3)

    Conclusion: a cross join is useful when you need the Cartesian product of two sequences. However, it can produce very large result sets that may cause performance problems, so use it with precaution :)
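
    For completeness, the same cross join can be written in method syntax with SelectMany, which is what the two from clauses above translate to:

        var crossJoin = products.SelectMany(
            p => pricingRules,
            (p, r) => new { ProductID = p.ProductID, RuleID = r.RuleID });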

    Read the article

  • Separating merged array of arithmetic and geometric series

    - by user1814037
    Given an array of positive integers in increasing order, separate them into two series: an arithmetic sequence and a geometric sequence. The given array is such that a solution does exist. The union of the two sequences must be the given array. Both series can have common elements, i.e. the series need not be disjoint. The ratio of the geometric series can be fractional. Example: given the series 2, 4, 6, 8, 10, 12, 25, the answer is AP: 2, 4, 6, 8, 10, 12 and GP: 4, 10, 25. I tried working through a few examples but could not find a general approach. I even tried a graph-based implementation, introducing edges between elements that follow a particular sequence, but could not reach a solution.

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a Cloud Management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by the cloud vendors, no more will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer its inner workings, CIMI is a de jure standard that is under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real-world experience.

    What does CIMI allow? CIMI is both a model for the resources (compute, storage, networking) in the cloud as well as a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.

    Isn't it too early for a standard? A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional, possibly vendor-specific, features have been implemented. As multiple vendors implement such features, they become candidates to add to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
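
    As a purely illustrative sketch of that interaction (the endpoint URL, template reference and JSON field names below are placeholders I have assumed for illustration, not the actual CIMI schema), a client creating a Machine over HTTP could look roughly like this:

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class CimiCreateMachineSketch
        {
            static async Task Main()
            {
                // Hypothetical provider endpoint and resource document; all names here are illustrative only.
                string machineDocument = @"{
                    ""name"": ""web-server-01"",
                    ""description"": ""example machine create request"",
                    ""machineTemplate"": { ""href"": ""https://cloud.example.com/cimi/machineTemplates/42"" }
                }";

                using (var client = new HttpClient())
                {
                    var content = new StringContent(machineDocument, Encoding.UTF8, "application/json");

                    // POST the document that represents the Machine resource to the machines collection.
                    HttpResponseMessage response =
                        await client.PostAsync("https://cloud.example.com/cimi/machines", content);

                    Console.WriteLine(response.StatusCode); // e.g. Created, once the provider accepts the request
                }
            }
        }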

    Read the article

  • New Job Over Budget

    - by moneylotion
    I recently started a new job as a contract developer, and about two weeks ago my non-developer boss gave me the task of re-creating an app from another language and developer, which he will reuse with multiple clients (replacing the front-end). I estimated it would take longer than his estimate of 12 hours. Two weeks later, I'm 230% over budget. I admit this is only my second web app; I was a WordPress developer in the past, so I am somewhat new to CodeIgniter, but by no means shabby at PHP. My boss hired me knowing this, and I was clear that it would take longer than his 12 hours. He's seen me in the office on task for two weeks, so he should be somewhat prepared for this bill. Do I bill the full number of hours, or do I discount for how much of it was learning? Can I bill for research as a developer?

    Read the article

  • Help file formats - MSHA files v CHM files

    - by TATWORTH
    Recently I was tasked with producing a help file from a C#/WPF/Crystal Reports application using Sandcastle. I have previously blogged about the problems in doing that and the change that is going into the next version of Sandcastle that allows the vagaries of Crystal (the missing BusinessObjects.Licensing.KeycodeDecoder) to be handled. At http://social.msdn.microsoft.com/Forums/en-US/devdocs/thread/0b110502-f5bb-4c56-96a5-4347a2a7a68a/, I describe how I tried each of the formats. Two of the formats could not be built, and the error messages were not exactly helpful as to the cause. These two formats turned out to be obsolete. The MSHA format worked but was not suitable for a standalone application, so that left me with the older CHM format. I therefore asked on that thread "will the HTML Help 1 (CHM) format continue to be supported for the foreseeable future?" Rob Chandler, MVP in help systems, gave a very helpful answer, to the effect that there is not yet a replacement for the CHM format.

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs:

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    (Only the relevant piece of code from EntityEngine, self explanatory. When an instance of Gravity is made in EntityEngine, it passes itself (this) into it, so that Gravity can have access to entityEngine.Entities, a dictionary of all planet objects.)

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    //First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        //Reset the force vector
                        Force = new Vector2();

                        //Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            //Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                //Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance()
                                //and then squaring it is pointless and inefficient, because Distance uses a sqrt; squaring the result
                                //simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                //This makes sure that two planets do not attract each other if they are touching; completely unnecessary
                                //once I add collision. For now it just keeps the planets from being glitchy; performance is not
                                //significantly improved by removing this IF.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    //Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time;
                                    //I know it's 1 at the moment, but I've been changing it)
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    //Calculate the direction of the force; simply subtracting the positions and normalizing works.
                                    //This keeps diagonal vectors from having a larger value and basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    //Add the vector for each planet in the second loop to a force var.
                                    //I have tried Force += VecForce * mult, and have not noticed much of an increase in speed.
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        //Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia)
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (Google Drive; go to File > Download, keep the .exe with the content folder; you will need the XNA Framework 4.0 Redist if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and Planet Count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 FPS (I have it capped at 300). At 90 planets, the FPS is about 2; more than that and it sticks around 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was. I have no systems in place to make this happen; it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly, simple way that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree-type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with a constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
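
    One standard first step (not from the original post, just the usual symmetry trick) is to exploit Newton's third law: the force of B on A is exactly the opposite of the force of A on B, so each pair only needs to be evaluated once. It is still O(N^2), but it roughly halves the work. A rough sketch, assuming a flat list built from entityEngine.Entities.Values (call it bodies), where each Entity exposes Mass and Position:

        const float G = 1.0f;                              // placeholder constant, same role as the 1.0f above
        Vector2[] forces = new Vector2[bodies.Count];      // accumulated force per body

        for (int i = 0; i < bodies.Count; i++)
        {
            for (int j = i + 1; j < bodies.Count; j++)     // visit each unordered pair exactly once
            {
                Vector2 direction = bodies[j].Position - bodies[i].Position;
                float distanceSquared = direction.LengthSquared();
                if (distanceSquared < 0.0001f)
                    continue;                              // skip overlapping bodies to avoid dividing by zero

                float magnitude = G * (bodies[i].Mass * bodies[j].Mass) / distanceSquared;
                direction.Normalize();

                forces[i] += direction * magnitude;        // pull i toward j
                forces[j] -= direction * magnitude;        // Newton's third law: equal and opposite
            }
        }

        for (int i = 0; i < bodies.Count; i++)
            bodies[i].Position += forces[i];               // or, later, feed this into acceleration/velocity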

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet: 1) The key needs to remain a single integral data type. Converting all existing keys to a compound key, a string or some other type would affect the entire code base and likely result in more bugs than it's worth. 2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized. One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices; I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
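
    To make the bit-splitting idea concrete, here is a minimal sketch of it (written in C# purely for illustration, since the app itself is iOS; the 5-bit/59-bit split is just one possible choice, not a recommendation):

        static class KeySketch
        {
            const int DeviceIdBits = 5;                       // high bits: supports up to 32 devices per iCloud account
            const int CounterBits = 64 - DeviceIdBits;        // 59 bits left for the per-device counter
            const long CounterMask = (1L << CounterBits) - 1;

            // Compose a globally unique key from a small per-device id and the existing local counter.
            // Note: a deviceId of 16 or more sets the sign bit; that is fine for uniqueness, but use 4 bits
            // if the keys must stay positive.
            public static long MakeKey(long deviceId, long localCounter)
            {
                return (deviceId << CounterBits) | (localCounter & CounterMask);
            }

            // Recover the parts if they are ever needed.
            public static long DeviceIdOf(long key) { return (long)((ulong)key >> CounterBits); }
            public static long CounterOf(long key)  { return key & CounterMask; }
        }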

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development is seeing a number of datafix bugs being logged related to PPR committing data in Payments (IBY) and missing corresponding payments in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply these fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply the following two patches ASAP:

        a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1)
        b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1)

    Read the article

  • Windows 7 can't boot with Ubuntu on different hard drive

    - by dellphi
    I dual boot with two hard disks and two operating systems: Ubuntu 10.04 and Windows 7. Windows 7 is installed on the first disk, first partition. Grub is installed on the second hard disk's MBR, and Ubuntu is installed on an extended partition on the second hard drive. When I select Windows 7 in the Grub menu, the HDD lamp lights up briefly and then the monitor goes to a black screen, though the keyboard still functions. So far (with the default boot from the first HDD), I have to press F12 to get into Grub to run Linux on the second HDD. Output of fdisk -l; grub.cfg. I want Grub to remain on the second HDD, and I want to be able to choose Windows 7 from the menu provided by Grub. But I cannot figure out how; I hope someone can help.

    Read the article

  • Nautilus left pane does not expand

    - by dn.usenet
    I would prefer the left pane of Nautilus to behave like the Windows file manager. It should have expandable/collapsible trees, and if I have /home/mydir-1 and /home/mydir-2, I should be able to see them both in the left pane. When I click on one of them, the files in that directory should show in the right pane. If Nautilus can't do it, please suggest a better file manager which does. I would rather not open three panes in Nautilus to do what two panes do just fine in the Windows File Manager. Secondly, how can I open two instances of Nautilus? And if it isn't possible with Nautilus, could it be done with some other file manager?

    Read the article

  • Scuttlebutt Reconciliation from "Efficient Reconciliation and Flow Control for Anti-Entropy Protocols"

    - by Maus
    This question might be more suited to math.stackexchange.com, but here goes. Their version: reconciliation takes two parts -- first the exchange of digests, and then an exchange of updates. I'll first paraphrase the paper's description of each step. To exchange digests, two peers send one another a set of pairs -- (peer, max_version) for each peer in the network -- and then each one responds with a set of deltas. The deltas look like (peer, key, value, version), for all tuples for which that peer's state maps the key to the given value and version, and the version number is greater than the maximum version number the other peer reported for it in its digest. This seems to require that each node remember the state of each other node, and the highest version number and ID each node has seen. Question: why must we iterate through all peers to exchange information between p and q?
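
    A rough sketch of that digest/delta computation (illustrative only; the type names and shapes are my assumptions, not the paper's pseudocode):

        using System.Collections.Generic;
        using System.Linq;

        // One tuple of gossip state: the named peer asserts that key maps to value at the given version.
        record Delta(string Peer, string Key, string Value, long Version);

        class PeerState
        {
            // Everything this node currently knows, grouped by the peer each tuple originated from.
            public Dictionary<string, List<Delta>> Known = new Dictionary<string, List<Delta>>();

            // Digest: for every peer we know about, the highest version number we hold for it.
            public Dictionary<string, long> Digest()
            {
                return Known.ToDictionary(kv => kv.Key, kv => kv.Value.Max(d => d.Version));
            }

            // Deltas to send back: every tuple strictly newer than what the requesting peer reported in its digest.
            public IEnumerable<Delta> DeltasFor(Dictionary<string, long> requesterDigest)
            {
                return Known.SelectMany(kv => kv.Value)
                            .Where(d => !requesterDigest.TryGetValue(d.Peer, out long maxSeen)
                                        || d.Version > maxSeen);
            }
        }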

    Read the article

  • Fake RAID (dmraid) not seeing new SATA drives

    - by rausch
    I have three drives in my machine, one SSD with 32GB and two 1TB drives, attached to an Intel 82801JI (ICH10) SATA AHCI Controller. The problem is that I can only access one of the 1TB drives when the other one is not plugged in. When it is plugged in, I see the drives as sda and sdb, but there seem to be no partitions. Looking at these drives with cfdisk, the partitions are there, though. Both of the 1TB drives carry a partition that is part of a software RAID1 created with mdadm. Before I threw the SSD into the mix, the other two had been working fine. Any hints?

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Although I've read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for designs that fit different screen sizes. To my understanding a pixel is the most basic unit used by a monitor to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to enlarge my screen resolution, three things would occur: A. the basic image building block (the pixel) would shrink in size; B. the pixels would move closer together; C. more pixels would now be available. All of these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good. Here is where I start getting lost: to my knowledge a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the total number of pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina displays. Taking all of the above as my knowledge base, these are my questions: If a pixel has no constant real-world size, why does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but with different pixel densities: am I to assert that the main difference would be that the higher-density screen is able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image? If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity? I'd love some help (:
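
    One number that helps untangle the density questions is pixels per inch (PPI), computed from the resolution and the physical diagonal of the panel. A small sketch of that standard calculation (the monitor sizes are made-up examples):

        using System;

        class PpiExample
        {
            // Pixels per inch = diagonal resolution in pixels / diagonal size in inches.
            static double Ppi(int widthPx, int heightPx, double diagonalInches)
            {
                return Math.Sqrt((double)widthPx * widthPx + (double)heightPx * heightPx) / diagonalInches;
            }

            static void Main()
            {
                // Same resolution, different physical sizes: the smaller panel has the higher density,
                // so each individual pixel is physically smaller and the image looks sharper up close.
                Console.WriteLine(Ppi(1920, 1080, 21.5)); // ~102 PPI
                Console.WriteLine(Ppi(1920, 1080, 27.0)); // ~82 PPI
            }
        }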

    Read the article

  • Why are Back In Time snapshots so large?

    - by Chethan S.
    I just backed up the contents of my home partition onto my external hard drive using Back In Time. I browsed to the backed-up contents on the external drive and under properties it showed me the size as 9.6 GB. Since I had read that for subsequent snapshots Back In Time does not back up everything again but creates hard links to older content and saves only newer content, I wanted to test it. So I copied two small files into my home partition and ran 'Take Snapshot' again. The operation completed within a minute - first it checked the previous snapshot, assessed the changes, detected the two new files and synced them. After this, when I browsed to the backed-up contents, I was surprised to see the newer and older backups taking up 9.6 GB each. Isn't this a waste of hard drive space? Or did I interpret something wrongly?

    Read the article

  • WF4 – It has suddenly got interesting

    - by MarkPearl
    I was at TechEd two years ago when one of the Microsoft leads said there were three new areas that we needed to pay attention to for development, namely: WPF, WCF and WF. At the time I was just getting back into development work; I had a look at WPF and was immediately sold on the approach. While I haven't been too involved with WCF directly, I know that some of the guys in my dev team have been, and that it too was a success. So what happened to WF? It seemed clunky, and all the demos that I saw of it left me scratching my head wondering how it was going to be useful. Fast forward two years and, while I have only had a brief look at WF4, I can immediately see areas where we can use the technology. Does that mean that I think WF4 is the bee's knees? I don't know enough about it yet to have a solid opinion, but I do think that it is finally going in the right direction. A good introduction to WF4 can be found here.

    Read the article

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    - by jamiet
    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT and, given the problems that it created yesterday on my current gig, I thought it prudent to write this blog post to inform people of it. I've built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]. The two tables have exactly the same definition; both have a single column called [Id] of type integer:

        CREATE TABLE [dbo].[Table1]
        (
            [Id] INT NOT NULL PRIMARY KEY
        )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

        CREATE PROCEDURE [dbo].[Procedure1]
        AS
            SELECT  t1.*
            FROM    Table1 t1
            INNER JOIN Table2 t2
                ON  t1.Id = t2.Id
            ORDER BY Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine. So we know that the code works. Unfortunately, SSDT thinks that there is an error here. The text of that error is:

        Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It's complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point "OK, just stick a table alias into the ORDER BY predicate and everything will be fine!" Well that's true, but there's a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine – he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many) ensuring that none of them upgraded their SSDT installation if they wanted to carry on being productive for the rest of the day. Also bear in mind that as soon as a new drop of SSDT comes out the previous version is instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version. Just thought you should know! In the grand schema of things this isn't a big deal, as the bug can be worked around with a simple code modification, but forewarned is forearmed, so they say! Last thing to say: if you want to know which version of SSDT you are running, check my blog post "Which version of SSDT Database Projects do I have installed?" @Jamiet

    Read the article

  • Forbes.com: Oracle's message is Loud & Clear – “We’ve Got The Cloud”

    - by Cinzia Mascanzoni
    In a two-part series on Oracle's cloud strategy, Bob Evans reports on the October 4 meeting where Wall Street analysts questioned Mark Hurd and Safra Catz about the company's positioning for the shift to cloud computing. Access the article and read the Q&A exchanges between the analysts and Hurd and Catz. And then check out Bob's related Forbes.com piece "The Dumbest Idea of 2013," in response to the preposterous chatter that Larry Ellison and Oracle don't "get" the cloud. His powerful six-point argument unravels our competitors' spin. Go to the two-part strategy article. Read the "Dumbest Idea." Follow Bob on Twitter as he frequently updates his Oracle Voice column on Forbes.com.

    Read the article

  • At Symbol not working for apt get proxy authentication Ubuntu 11.10

    - by Shivhari
    I have tried two things in three places to see if it works; please do help me out.

    Two methods:
    1) replacing @ with %40
    2) replacing @ with \@

    Three places:
    1) an export in the .bashrc file
    2) editing /etc/apt/apt.conf and setting the Acquire entries there
    3) using gconf-editor and setting the values in /system/http_proxy, setting the authentication name and password and checking the use_authentication checkbox.

    Still there is no success and I still get a 407 error when trying wget or apt-get update. Please do help me; I've been stuck with this for three hours now. Also, I read somewhere that creating a 01proxy file in /etc/apt/apt.conf.d with an Acquire entry might work. I tried that as well, but it doesn't work. Please help.

    Read the article

  • ubuntu 12.10 slow application load upon restart

    - by Adam
    New Ubuntu user here. After boot (boot is quite fast), opening applications such as Firefox, LibreOffice or System Settings takes close to 10 seconds. Once an application is loaded in RAM, it opens fine. The system feels snappy and quick: when I have two Firefox windows open, Unity is snappy in showing me the two windows side by side. But after boot-up, loading an application takes 10 seconds, and there is not much HDD activity. It is a fresh install. It's the same for System Settings and browsing System Settings: when System Settings is opened for the first time (10 seconds) and I then click, for example, Color, it takes quite a while to open the colour settings. Hardware: i3, 4 GB RAM, Radeon 5770, 250 GB SATA, Gigabyte H55M motherboard. I am using Ubuntu straight out of the box, no proprietary drivers, 64-bit. I have reinstalled the OS and it's still the same.

    Read the article

  • Microsoft Surface - my take

    - by Sahil Malik
    Okay, so the news has sunk in. Microsoft talked about two tablets, one that runs WinRT, the other that runs full Win8 Pro. I thought I'd compare the two, and put on my clairvoyance hat to predict where this will go. In fairness, I think you can compare the WinRT Surface to the iPad, and the Win8 Pro Surface to the MacBook Air. So here is a bang-by-bang comparison:

                      WinRT Surface    iPad         Verdict
        Weight        676 grams        652 grams    Equal
        Thickness     9.3 mm           9.4 mm       Equal

    Read full article ....

    Read the article
