Search Results

Search found 18191 results on 728 pages for 'single board'.

  • Oracle's Vision for the Social-Enabled Enterprise - Partner Webcast. September 10th

    - by Richard Lefebvre
    Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform, one that lets you monitor and engage in social conversations; collect and analyze social data; build and grow brands through social media; integrate enterprise-wide social functionality into a single system; and create rich social applications. Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle's vision for the social-enabled enterprise. Register now for this Webcast: Monday, September 10, 2012, at 10 a.m. PT / 19:00 CET.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice it does not introduce a significant single point of failure or bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:
      - Naive algorithms had an unreasonable upper bound of computational cost.
      - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
      - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).
    A custom PAS may need to solve several problems simultaneously, such as:
      - Ensuring that each member has a similar number of primary and backup partitions (e.g. every member holds the same number of primaries and the same number of backups)
      - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
      - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that partition across message queues, ensuring that each partition is collocated with its corresponding message queue)
      - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
      - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
      - Achieving all of the above while keeping partition movement to a minimum.
    These objectives become even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then the number of members on each machine may vary at certain points (e.g. following a member failure), in some cases significantly so. Consider the case of three physical machines with 3, 3 and 9 members respectively. This introduces complexity because the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, apply that function to each possible permutation, and select the best-scoring permutation. This results in N! (factorial) evaluations of the scoring function, which is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that such algorithms tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both); it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges in designing PAS implementations that should be very carefully evaluated before committing to that approach.
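    To make the cost concrete, here is a minimal, language-agnostic sketch of the brute-force idea, written in C# purely for illustration (it does not use the Coherence PAS API): enumerate every permutation of the partition list, turn each one into a candidate assignment, score it, and keep the best. The Score and RoundRobin helpers are hypothetical placeholders; the point is the N! loop, which is why this approach is only viable for tiny partition counts.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class BruteForcePartitionAssignment
      {
          // Assigns partitions, taken in the given order, round-robin to members;
          // returns owner[partition] = member index.
          static int[] RoundRobin(IList<int> partitionOrder, int memberCount)
          {
              var owner = new int[partitionOrder.Count];
              for (int i = 0; i < partitionOrder.Count; i++)
                  owner[partitionOrder[i]] = i % memberCount;
              return owner;
          }

          // Hypothetical scoring function (lower is better): here it only measures
          // how unevenly the partitions are spread across the members.
          static int Score(int[] owner, int memberCount)
          {
              var counts = new int[memberCount];
              foreach (int member in owner) counts[member]++;
              return counts.Max() - counts.Min();
          }

          // Yields every permutation of the given items (N! of them).
          static IEnumerable<List<int>> Permutations(List<int> items)
          {
              if (items.Count <= 1) { yield return items; yield break; }
              for (int i = 0; i < items.Count; i++)
              {
                  var rest = items.Where((_, idx) => idx != i).ToList();
                  foreach (var tail in Permutations(rest))
                      yield return new[] { items[i] }.Concat(tail).ToList();
              }
          }

          static void Main()
          {
              const int partitionCount = 7;  // N! grows unmanageably fast beyond single digits
              const int memberCount = 3;
              var partitions = Enumerable.Range(0, partitionCount).ToList();

              int bestScore = int.MaxValue;
              int[] best = null;
              foreach (var order in Permutations(partitions))  // N! evaluations of Score
              {
                  int[] owner = RoundRobin(order, memberCount);
                  int score = Score(owner, memberCount);
                  if (score < bestScore) { bestScore = score; best = owner; }
              }
              Console.WriteLine("best score: {0}, assignment: {1}",
                  bestScore, string.Join(",", best));
          }
      }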

    Read the article

  • Does using a hexacore CPU make sense?

    - by Exa
    I'm currently planning to upgrade my computer and want to replace the CPU, board, and RAM. I've already looked at some hexacore CPUs from AMD and would like to know whether it makes any sense to use a CPU with six cores. Is there any software that really uses six cores, especially in games? I use this PC mostly for gaming and from time to time for developing. I know that on the dual-core system (2 x 3 GHz) I currently use, Visual Studio creates two instances of the compiler, one for each core. Would there be six compiler instances on a hexacore system for super-fast compiling? Would running two applications make use of more cores, for example two cores for a game you're playing while two other cores compile at the same time? I hope someone can point out the benefits of a hexacore system. The OS would be Windows 7 64-bit, and I use the PC for gaming most of the time (Crysis 2, CoD, stuff like that).

    Read the article

  • How to quantify product work in Resume?

    - by mob1lejunkie
    One of the things I do in my resume is try to quantify the impact my work has had at the company I was with at the time, because it shows the value my work added to the business. Is this what you do as well, or am I the only one? In my previous job this was easy: I worked on short and medium-sized internal applications, and it was fairly straightforward to measure the end result. For example, an external consulting company quoted $50,000 for an application the Business Services department wanted; I completed it in 3 days, so I say I saved the company $48,000. I have been in my current job for 3 years, but all of it has been on a single, well-established product. About 30% of the work is maintenance and 70% is new modules. I have worked on various modules like an API (WCF), security (two-factor authentication), etc. How should I quantify work on modules? Many thanks.

    Read the article

  • When to use an Array vs When to use a Vector, when dealing with GameObjects?

    - by user32465
    I understand from other answers that arrays and vectors are the best choices, and many on SE claim that linked lists and maps are bad for video game programming. I understand that for the most part I can use arrays. However, I don't really understand exactly when to use vectors over arrays. Why even use vectors? Wouldn't it be best if I simply always used an array, so that I know how much memory my game needs? Specifically, my game would only ever load a single "Map" area of tiles, such as Map[100][100], so I could very easily have an array of GameObjectContainer GameObjects[100][100], which would reserve an entire map's worth of possible game objects, correct? So why use a vector instead? Memory is quite large on modern hardware.

    Read the article

  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. Right now it doesn't process the texture coordinates or normals, but it stores them in arrays and creates a valid mesh from the vertices and indices. Now I am trying to figure out how I can make the shader use the correct normal in the array for the current vertex if I can't call setnormals() on my mesh. If I were to just use an index into my array of normals corresponding to the index of the vertex, how would I retrieve the current index the shader is processing? BTW: I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and add the NORMAL semantic to it, how would I list multiple semantics in that single parameter? Would I just separate them with a space? PS: If you need to see any code, just let me know.

    Read the article

  • Why is my content database so large?

    - by PeterBrunone
    If your SharePoint site collection hasn't grown, but your content database has, the most likely culprit is versioning. If a list -- or worse, a library -- has versioning enabled, the default is to keep every single one. That means that every time someone edits and checks in a document, its storage footprint increases by the size of the document (and probably a little more). The solution? It could be a bit painful, but you'll need to go back into each library and restrict the number of versions to keep (three is sufficient for most uses, but your needs may vary). I suggest keeping only major versions as well, since minor versions are really just stopping points on the way to a published document. Of course, if you have a real business need to keep all those versions around, then you'll want to look into an archiving solution that will take the old versions out of the content database but still make them available if necessary.
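    For reference, the same limits can also be applied programmatically. The following is a minimal sketch using the SharePoint server object model (it must run on a SharePoint server under an account with sufficient rights); the site URL and library title are placeholders for your own environment, and existing excess versions may still need to be cleaned up separately.

      using Microsoft.SharePoint;

      class TrimLibraryVersions
      {
          static void Main()
          {
              // Placeholder URL and library title: substitute your own.
              using (SPSite site = new SPSite("http://sharepoint/sites/example"))
              using (SPWeb web = site.OpenWeb())
              {
                  SPList library = web.Lists["Shared Documents"];

                  library.EnableVersioning = true;      // keep versions, but...
                  library.EnableMinorVersions = false;  // ...no draft (minor) versions
                  library.MajorVersionLimit = 3;        // ...and at most three majors
                  library.Update();
              }
          }
      }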

    Read the article

  • Webcast: Oracle's Vision For The Socially-Enabled Enterprise

    - by Michael Hylton
    Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform, one that lets you monitor and engage in social conversations; collect and analyze social data; build and grow brands through social media; integrate enterprise-wide social functionality into a single system; and create rich social applications. Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle's vision for the social-enabled enterprise. Click here to register.

    Read the article

  • Schema.org vs microformats

    - by Tordek
    They both serve the same purpose: providing a vocabulary for semantic markup. Schema.org is recognized and standardized, but microformats are open. Schema.org builds on microdata, while microformats rely on classes. (Of note: microdata means an element must have a single itemtype, while microformats allow several classes to apply to the same element. I can mark up xFolk+hAtom with classes, but not with microdata.) Is this a black-and-white situation? Google says I can't use both "because it may confuse the parser". What's the consensus on these?

    Read the article

  • Microphone problem in Ubuntu 11.10

    - by Teja
    I dual-boot Windows 7 and Ubuntu 11.10 on a single system with the same hardware. When I make a call through Gmail or Skype on Windows 7, my voice is audible to others, but on Ubuntu 11.10 it is not, even though I can hear their voices. If I play music on Ubuntu during a call, the others can hear the music, but my voice is still not audible to them. Please give me a solution for this and tell me where I can change the microphone settings in Ubuntu 11.10.

    Read the article

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes effectively, I need to get rid of the large number of draw calls I currently make, and what has been suggested is that I create a "mesh" of my cubes. I already store them in a single vertex buffer, but the issue lies in my draw method, where I still loop through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles and drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, and to not draw individual cubes as I've been doing.
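    One way to structure this (a sketch only, assuming XNA 4.0 and that you have already merged every cube's vertices, offset by the cube's position, into one vertex buffer and one index buffer) is to let the component issue a single indexed draw call for the whole chunk. BasicEffect stands in for your own shader, and the constructor parameters are hypothetical.

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // Renders an entire chunk of cubes with one draw call, assuming the cube
      // geometry has already been merged into a single vertex/index buffer pair.
      public class CubeChunk : DrawableGameComponent
      {
          private readonly VertexBuffer vertexBuffer;
          private readonly IndexBuffer indexBuffer;
          private readonly int triangleCount;
          private BasicEffect effect;

          public CubeChunk(Game game, VertexBuffer vb, IndexBuffer ib, int triangles)
              : base(game)
          {
              vertexBuffer = vb;
              indexBuffer = ib;
              triangleCount = triangles;
          }

          protected override void LoadContent()
          {
              effect = new BasicEffect(GraphicsDevice);
              // Set effect.World/View/Projection from your camera before drawing.
              base.LoadContent();
          }

          public override void Draw(GameTime gameTime)
          {
              GraphicsDevice.SetVertexBuffer(vertexBuffer);
              GraphicsDevice.Indices = indexBuffer;

              foreach (EffectPass pass in effect.CurrentTechnique.Passes)
              {
                  pass.Apply();
                  // One call for the whole chunk instead of one call per cube.
                  GraphicsDevice.DrawIndexedPrimitives(
                      PrimitiveType.TriangleList, 0, 0,
                      vertexBuffer.VertexCount, 0, triangleCount);
              }
              base.Draw(gameTime);
          }
      }

    When the contents of the chunk change, you would rebuild or update the buffers rather than fall back to drawing cubes one at a time.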

    Read the article

  • Cliché monsters to populate a dwarven dungeon in a steampunk fantasy setting?

    - by Alexander Gladysh
    I'm looking for a list of cliché monsters for a steampunk computer game (assume one kind or another of casual rogue-like RPG) to populate the lower levels of ancient dwarven-built dungeons. Dwarves are a technology/science race in the setting I am aiming for, and the world is a low-magic one. I'm stuck after listing various mechanical golems, gigantic spiders (every dungeon must have some of them!), and maybe a mechanical balrog as a megaboss. What would players expect? What are the key cultural references for such a setting? I know a couple of games with suitable steampunk dwarves, but none are detailed enough in the underworld-monsters area. Please point me in the right direction. (If you have a single funny monster suggestion, please mention it in the comments, not in an answer. ;-) )

    Read the article

  • What is a light-weight "slideshow" script that could integrate w/ CMS?

    - by aslum
    I'm looking to reduce the footprint of my Strict HTML 4.01 front page. One possible way is to combine much of the "upcoming events" content into a single small box and have it automagically switch which event is displayed every few seconds. I'm sure plenty of scripts like this have been written already, and surely an open-source one exists, but I haven't had much luck finding one. I'd prefer plain JavaScript to jQuery, as installing jQuery might not be an option, but if the best-fit script requires jQuery I'd certainly be willing to investigate that route. If it can display content from WordPress, that would be ideal.

    Read the article

  • Ubuntu Variant / Linux Distros which uses least system resources (RAM, CPU)?

    - by elegantonyx
    I have a netbook (an older Asus Eee PC 1005HA) that I want to get rid of Windoze on (I like Windows, but I don't think it works well in a netbook environment). Basically, my question is which Ubuntu variant will use the least RAM and CPU when idle, and/or when running, say, Firefox and LibreOffice Writer. I am also open to suggestions of non-Ubuntu Linux distros, but since this is AskUbuntu I thought the first question would be more appropriate. I have a disk drive I can attach to the netbook, so it doesn't have to be an Ubuntu variant / Linux distro that boots solely from a USB drive. At my disposal: DVDs, a DVD writer / disk drive, a 4 GB flash drive, and an 8 GB flash drive. I was thinking either Lubuntu or ArchBang / CrunchBang, but I would like some help from more knowledgeable people. Specs (I can't boot into it right now): either an Intel Atom N270 @ 1.60 GHz or an Intel Atom N280 @ 1.66 GHz (single core, I think), 2 GB RAM, and a 160 GB hard drive.

    Read the article

  • How To Delete Your Skype Call and Chat History

    - by Gopinath
    Just like every other modern application, Skype records all the communications we exchange using it. It records instant messages, calls, file transfers, SMS, etc., and makes them easy to view using the Conversation tab. If you ever feel like getting rid of this history, you need to delete it. Skype provides a single-click option to clear all the history from your account, but the feature is buried deep under the options menu. Really deep! To clear history, follow the menu Tools -> Options, switch to the Privacy settings tab on the left side, click the Show advanced options button, and finally hit the Clear history button. Ah! You are almost done. Just confirm the popup it displays on screen and your history is wiped from your account.

    Read the article

  • Are there compatibility issues opening Visual Studio Professional projects in Visual Studio Express, and vice versa? [migrated]

    - by theGreenCabbage
    Disclaimer: I have taken a look at the 50+ StackExchange forums to find the right place, and it seems /Programmers/ is the most suitable Exchange for this. If this is the wrong place to ask, however, please let me know and I will delete the thread. I am in the process of purchasing a single license of Visual Studio 2013 for my firm of 2-3 developers. One license is approximately $498.00 USD. As a small firm our funds are short, but since we will be creating commercial software, we decided we will need the features of the Professional edition. At the same time, we plan to use the Express edition for the remaining developers. My question is: will there be compatibility issues between Express projects and Professional projects in Visual Studio?

    Read the article

  • Is it relevant to warn about truncating real constants to 32 bits?

    - by zneak
    I'm toying around with LLVM and looking at what it would take to make yet another strongly typed language, and now that I'm working on the syntax, I've noticed that it seems to be a pet peeve of strongly typed languages to warn people that their constants won't fit inside a float:
      // both in Java and C#
      float foo = 3.2; // error: implicitly truncating a double into a float
                       // or something along these lines
    Why doesn't this work in Java and C#? I know it's easy to add the f after the 3.2, but is it really doing anything useful? Must I really be that aware that I'm using single-precision reals instead of double-precision reals? Maybe I'm just missing something (which, basically, is why I'm asking). Note that float foo = [const] is not the same thing as float foo = [double variable], where requiring the cast seems normal to me.
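    For concreteness, a short C# example of the behaviour being discussed (Java behaves the same way, with the same f/F suffix):

      class FloatLiterals
      {
          static void Main()
          {
              // float a = 3.2;     // CS0664: a double literal does not implicitly
              //                    // narrow to float, since precision can be lost
              float b = 3.2f;       // OK: the literal itself is single-precision
              float c = (float)3.2; // OK: explicit narrowing cast
              double d = 3.2;       // an unsuffixed real literal is a double

              System.Console.WriteLine(b == c);          // True: both round to the same float
              System.Console.WriteLine((double)b == d);  // False: 3.2f and 3.2 differ
          }
      }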

    Read the article

  • New Exadata and Exalogic public references

    - by Javier Puerta
    The following are new public references for Exadata and Exalogic:
      - Allegis Accelerates HR Processing for 130,000 Contractors: Oracle customer Allegis describes how Oracle Exadata and Oracle Exalogic helped consolidate and optimize critical processes running in Oracle's PeopleSoft.
      - Hyundai Motor Company Cuts Document Repository Management and Access Times by Approximately 85%, Saves More Than US$1 Million in Yearly Printing and Paper Costs: The company implemented Oracle Exalogic Elastic Cloud, Oracle Exadata Database Machine, Oracle WebLogic, and Oracle WebCenter Content 11g to ensure high performance and stability for its new document-centralization system.
      - University of Minnesota Reduces Data Center Footprint While Enhancing Performance and Manageability with Oracle Exadata Database Machine: The leading research institution consolidated more than 200 databases to approximately 20 while maximizing availability for thousands of users.
      - SThree Prepares to Triple in Size with a Cloud-Based Architecture and a Consolidated, Stable, and Scalable Global Platform: By consolidating 68 databases into a single Oracle Exadata Database Machine, SThree achieved the stability and scalability it needed to support its growth targets. Further enhancements to the organization's core systems include a planned upgrade for Siebel Contact Center and improved integration with Oracle Fusion Middleware.

    Read the article

  • I'm having trouble understanding the wording of these exercises.

    - by KasHKoW
    Exercise 1-20. Write a program detab that replaces tabs in the input with the proper number of blanks to space to the next tab stop. Assume a fixed set of tab stops, say every n columns. Should n be a variable or a symbolic parameter?
    Exercise 1-21. Write a program entab that replaces strings of blanks by the minimum number of tabs and blanks to achieve the same spacing. Use the same tab stops as for detab. When either a tab or a single blank would suffice to reach a tab stop, which should be given preference?
    Could you paraphrase these for me? Thanks.

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like /about/history, /about/team, /contact/email-us, and /contact. I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this: { pagePathLevel1: /about, avgTimeOnPage: 28 }, { pagePathLevel1: /contact, avgTimeOnPage: 10 } This looks roughly like what I want, but I'm not sure how to interpret it: Is the value of avgTimeOnPage the average time spent by any user on all pages that match that path? Or is it the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.

    Read the article

  • If all variables are a subset of the superkey, is the database design 5NF? [migrated]

    - by Lukazoid
    I have a table called LogMessages, which has the following columns:
      - Level: a numeric value representing Trace, Debug, Info, Warning, Error, or Fatal
      - Time: a UTC time
      - Message: foreign key to a Messages table
      - Source: foreign key to a Sources table
      - User: foreign key to a Users table
    From what I can see, all of these columns are part of the superkey; if any single value differs from an existing row, a new row can be created. My question is: does this design comply with fifth normal form? I am unsure, as some groups of data will repeat, but I don't believe this violates 5NF. (Correct me if I'm wrong.)

    Read the article

  • Multi Column Block Too Narrow in Chrome

    - by aksarben
    My Web site displays song lyrics in a multi-column format using CSS3. Both Firefox and MSIE 10+ display the multi-column text perfectly, but Chrome does not. This sample page shows the problem: http://www.hymntime.com/tch/test/html5/html5-multicolumn-test.htm The page uses a media selector, so your Chrome window must be at least 1280 pixels wide to see the effect. In fact, if you make the Chrome window less than 1280 pixels wide, you'll see the lyrics block change to a single column of the same overall width. In other words, when Chrome shifts from 1-column to 2-column mode (due to the wider browser window), the lyrics block remains the same width, causing the text to be squeezed together. Has anyone else seen this behavior, or does anyone know a solution? Is this a Chrome bug, or am I doing something wrong? I posted this question on a Chrome forum a while back but got no reply.

    Read the article

  • Need Help Scoping a Server to use for study (MCITP Ent Admin + SharePoint 2010)

    - by AVFamily76
    I need to study for the MCITP Enterprise Administrator exams, and I also need to study for SharePoint 2010. I have a PowerEdge 1850 with two single-core CPUs and two 73 GB drives. It kills me on electricity, so I don't want to use it, and it won't do VT, but it could be one of three boxes for a lab that's cheap up front yet costs a lot in electricity. I was thinking:
      - Option 1: an Opteron 4170 HE (50-watt chip), 6-core, only two bills ($200), but the boards are $250, so that's an $800 box; then get another box to dual-boot Win7/Hyper-V on the cheap...?
      - Option 2: a used quad-core, but how many VMs that are really banging away could it run at the same time? (Server 2008 R2, SQL Server 2008 R2, Search Server)
      - Option 3: study from books and just get one box that can run two VMs at the same time, even if slowly.
    The last time I had and used a home lab was five years ago, when I had a DC, SQL, Exchange, and a business-app box. That's where I got my server skills, just banging on it for four years, but I didn't read any books, so now I have to get certified and know the material, and I'm just not sure how much attention I should pay to the box I use versus the studying time and reading. Sorry it's a subjective question, and I'm obviously open to all sorts of abuse here, but I hope you can also tell me how many VMs I can run at the same time given what they'll be doing (SQL and the SharePoint FAST Search Server are resource hungry). Thanks!

    Read the article
