Search Results

Search found 6068 results on 243 pages for 'total anime immersion'.


  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the cloud today! It is the broadest strategy to support your journey to the cloud along any path you choose, at any pace your business requires. That is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think? We will be at the Cloud Expo conference taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East, and is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters. We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service journey with ease, confidence, speed, and savings. In this solution we bring together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Boot Camp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be at booth #511 on the Expo floor throughout the conference. Join us for the keynote, general session, and technical sessions with Oracle:

    Keynote Session: A Pragmatic Journey to the Cloud, Tuesday, June 12, 2012
    General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications, Monday, June 11, 2012
    Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control, Monday, June 11, 2012
    Conference Session: The Java EE 7 Platform: Developing for the Cloud, Monday, June 11, 2012
    Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture, Monday, June 11, 2012
    Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder, Tuesday, June 12, 2012
    Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution, Tuesday, June 12, 2012
    Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, Wednesday, June 13, 2012

    Read the article

  • Oracle at Work video: Banca Transilvania, Exadata and Exalogic, applications and data warehouse

    - by user645740
    In the Oracle at Work video series, senior executives of the Romanian bank Banca Transilvania share their hands-on experience of running banking operations on Exadata Database Machine and Exalogic. Video link: Video Case Study with Banca Transilvania. Headquartered in Cluj-Napoca, Banca Transilvania is the third-largest bank in Romania, serving 1.5 million active customers, and the bank is growing. When the second-generation Exadata V2 series appeared in 2009, Banca Transilvania was the first banking customer for it in the world. They first moved their data warehouse to Exadata and saw a huge performance jump, which the business was able to exploit: more reports, extremely good response times, faster batch runs. The goal is to consolidate the bank's architecture onto Exadata and Exalogic. In the video, Banca Transilvania's executives - Robert C. Rekkers, CEO; Leontin Toderici, COO; Marius Ursuti, IT Director - talk about the benefits of their Exadata + Exalogic solution:

    - they are consolidating their applications on Oracle
    - card system
    - the new core banking system, FLEXCUBE, will also run on Exadata, helping the bank handle its growth and the rising number of customers and transactions
    - Siebel CRM
    - they are also testing the Oracle E-Business Suite applications on the Exa architecture
    - best performance for Oracle databases, lower response times
    - data warehouse and Oracle OLAP: cost reduction, 30x faster operation
    - Exalogic: consolidation of application servers on WebLogic Server
    - the bank's own tests also show that Exalogic's advantages stand out even more when used together with Exadata
    - thanks to the pre-installed software and the thoroughly factory-tested and tuned machines, work can start within 24 hours of setting them up
    - a simpler architecture, whose management also costs less
    - deployment time for any system is generally shorter on the Exa architecture
    - it enables faster introduction of new products and services
    - cost reduction: the total cost (Total Cost) has decreased
    - they highlighted the flexibility of Oracle Corporation's well-prepared team
    - Oracle's continuous development, innovation and support

    Video link: Video Case Study with Banca Transilvania.

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than duplicating vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is: how do I deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate between vertex X being used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, however I would like to render a cube with flat faces. Also, averaging still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
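
    The usual compromise is to duplicate each vertex once per face that uses it: a cube then needs 24 vertices (4 per face x 6 faces) rather than 36, each copy carrying its own normal and UV, while the index buffer still shares the two triangles within a face. A minimal sketch of building such buffers (plain Python lists for illustration; in OpenGL ES these would become typed arrays bound to ARRAY_BUFFER and ELEMENT_ARRAY_BUFFER):

        # Duplicate vertices per face: 6 faces x 4 corners = 24 vertices; the
        # index buffer shares vertices only within a face (2 triangles per
        # face, 6 indices per face, 36 indices total).
        faces = [
            # (face normal, four corner positions, counter-clockwise from outside)
            (( 0,  0,  1), [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]),  # front
            (( 0,  0, -1), [(1, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0)]),  # back
            ((-1,  0,  0), [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)]),  # left
            (( 1,  0,  0), [(1, 0, 1), (1, 0, 0), (1, 1, 0), (1, 1, 1)]),  # right
            (( 0,  1,  0), [(0, 1, 1), (1, 1, 1), (1, 1, 0), (0, 1, 0)]),  # top
            (( 0, -1,  0), [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]),  # bottom
        ]
        uvs = [(0, 0), (1, 0), (1, 1), (0, 1)]  # the same UVs work for every face

        vertices = []  # one (position, normal, uv) entry per duplicated vertex
        indices = []   # two triangles per face
        for normal, quad in faces:
            base = len(vertices)
            vertices.extend((pos, normal, uv) for pos, uv in zip(quad, uvs))
            indices += [base, base + 1, base + 2, base, base + 2, base + 3]

        print(len(vertices), len(indices))  # 24 36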

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays; in the case of many, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario) I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single array five levels deep. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL. Array structure:

        Array of years(
            year => array of types(
                type => array of information(
                    total => value,
                    table => array of data(
                        index => db array
                    )
                )
            )
        )

    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
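
    For what it's worth, nested autovivifying containers make the deep-to-shallow approach almost mechanical: place each row at its leaf and let the container create the enclosing levels on demand. A sketch in Python with hypothetical field names (PHP arrays behave the same way, e.g. $report[$year][$type]['total'] += $value):

        from collections import defaultdict

        # Hypothetical flat rows as they might come back from the database:
        # each row already carries the year/type it belongs to.
        rows = [
            {"year": 2011, "type": "sales",  "index": 0, "value": 120},
            {"year": 2011, "type": "sales",  "index": 1, "value": 80},
            {"year": 2012, "type": "refund", "index": 0, "value": 15},
        ]

        # Deep-to-shallow: drop each row at its leaf position and let the
        # nested defaultdicts create the enclosing levels automatically.
        report = defaultdict(lambda: defaultdict(lambda: {"total": 0, "table": []}))
        for row in rows:
            bucket = report[row["year"]][row["type"]]
            bucket["total"] += row["value"]
            bucket["table"].append(row)

        print(report[2011]["sales"]["total"])  # 200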

    Read the article

  • Scientists Demonstrate First-Person Shooter Games Improve Vision

    - by Jason Fitzpatrick
    Need an excuse to log a few more hours playing Call of Duty or Medal of Honor? Scientists demonstrated improved vision in test subjects after daily doses of first-person shooter games. Scientists at McMaster University took subjects who, as the result of surgery correcting congenital cataracts, had less than 20/20 vision. Subjects played Medal of Honor for a total of 40 hours over the course of 4 weeks before having their vision retested. The results? The CBC reports: The participants showed improvements in seeing detail, in perception of motion and in low-contrast settings. In essence, players could now read about one to one-and-a-half more lines on an optometrist's eye chart. "We were thrilled," Lewis said. "It's very exciting to open up a new world of hope for these people."

    Read the article

  • The biggest ADF conference "down under"

    - by Chris Muir
    While Oracle Open World is the place to be for ADF presentations, for Aussies living in Perth, San Francisco is a tad far away (believe me from experience, the 23-hour PER-SYD-SFO flight is tedious). That's why I'm very excited to see that the Australian Oracle User Group is running its largest set of ADF presentations to date at this year's Perth conference: 5! Okay, it doesn't compare to the 60 ADF sessions at OOW, but this is a small conference of around 300 people that runs for 2 days with 54 sessions in total, not 40,000 people over 5 days with 1,900+ sessions, so I think that's a good effort for a conference at the end of the earth! What's even better about this year's conference is that AUSOUG is moving beyond just consultants and Oracle staff presenting: customers will be presenting on ADF too. This again proves Perth is a little ADF hotspot, which brings a tear to an ADF product manager's eye, let me tell you ;-) The ADF sessions will include:

    Kevin Payne - JWH Group - ADF Mobile Application Development
    Matthew Carrigy - Department of Finance Western Australia - The times, they are a-changin' - An Oracle Forms to JDeveloper ADF Case Study
    Penny Cookson & Chris Noonan - Sage Computing Services - Impress your bosses with JDeveloper ADF dashboards on their iPads
    ...oh and...
    Chris Muir - Oracle Corporation - Speed-Dating Oracle JDeveloper 12c and Oracle ADF New Features
    Chris Muir - Oracle Corporation - Develop Mobile Apps for Smart Devices: Converging Web and Native Applications

    You can check out the conference schedule here. I hope you'll support these ADF presenters by attending the AUSOUG Perth conference; I look forward to seeing you there.

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely: there is a fabulous free tool for generating load on SQL Server on CodePlex. This tool was released in 2008, but it is still extremely relevant for generating load on SQL Server and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One thing I felt needed improvement was the default configuration: every single time I added a query, the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default. Here one can give a default username, password, and database name, as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other points I noticed were the button to reset counters and the status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use for SQL Server load generation? Download SQL Load Generator. Reference: Pinal Dave (http://blog.sqlauthority.com)
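
    For readers who want the same effect from a script, here is a rough sketch of what such a load generator does, assuming Python with the pyodbc driver and made-up credentials; the APP= attribute in the connection string is what makes each worker show up under its own application name:

        import threading
        import pyodbc  # assumes the Microsoft ODBC driver for SQL Server is installed

        def run_load(login, password, app_name, query, iterations):
            # APP= sets the application name SQL Server sees for this session.
            conn = pyodbc.connect(
                "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                f"DATABASE=AdventureWorks;UID={login};PWD={password};APP={app_name}"
            )
            cursor = conn.cursor()
            for _ in range(iterations):
                cursor.execute(query).fetchall()
            conn.close()

        # Five simultaneous "users", each under its own login and application name.
        threads = [
            threading.Thread(
                target=run_load,
                args=(f"loaduser{i}", "Secret!123", f"LoadGen-{i}",
                      "SELECT COUNT(*) FROM Sales.SalesOrderDetail;", 100),
            )
            for i in range(5)
        ]
        for t in threads: t.start()
        for t in threads: t.join()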

    Read the article

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started university in informatics and I'm attending a programming course on C(++). The programming professor prefers to teach very few things (in 3 months we have just reached the topic of functions) and connects every topic with a style of program design that is somehow similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments stating pre-conditions, post-conditions and invariants that should prove the correctness of each program we write. But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all abstract, then if your intuition about the program is wrong, you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed OK and that had correct pre- and post-conditions too, but at the moment of testing they were just completely wrong. I had some programming experience before this course and had written a lot of code, and I found myself comfortable just writing a program and unit testing it. That takes less time and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take me about 80% of the total time of the exercise; it's harder to get them right than to write the program itself. Since we are probably the only course at the only university in the entire world doing this, could someone please tell me how I should handle it? Am I right in thinking that this isn't worth anything? Should I change university? (There are about double the usual number of people attending this course, and it seems that usually very few people pass the exam in the first year.) Should I convince myself that his method is right?
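
    For context, pre- and post-conditions don't have to stay abstract comments; many people turn them into executable assertions so the contract is checked every time the code runs, which combines nicely with the unit testing mentioned above. A small Python illustration of the idea (in the C used by the course, assert.h serves the same purpose):

        def int_sqrt(n: int) -> int:
            """Integer square root via binary search."""
            assert n >= 0, "pre-condition: n must be non-negative"
            lo, hi = 0, n + 1
            while lo + 1 < hi:                 # invariant: lo*lo <= n < hi*hi
                mid = (lo + hi) // 2
                if mid * mid <= n:
                    lo = mid
                else:
                    hi = mid
            assert lo * lo <= n < (lo + 1) ** 2, "post-condition violated"
            return lo

        print(int_sqrt(17))  # 4

    The difference from a purely mental contract is that a wrong intuition now fails loudly at run time instead of silently convincing you of correctness.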

    Read the article

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform

    News Facts

    - Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry.
    - eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers, including the world's largest IT-based installation of pre-paid services.
    - The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company.
    - Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain, and delivers an overall lower total cost of ownership.
    - The transaction is expected to close in the second half of this year.

    Supporting Quote

    - "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications.

    Supporting Resources

    - About Oracle and eServGlobal
    - USP General Presentation
    - FAQ

    Read the article

  • How Many Google +1's Does a Website Need in Order for Google Webmaster Tools to Show Characteristics?

    - by Asaph
    I have added the Google +1 button to my website and discovered the new Social Activity section in Google Webmaster Tools. Apparently, one of the interesting things you can learn about your audience is demographic data. But in GWT, the Social Activity Audience section for my site (currently 82 +1's) says the following: "Your site doesn't have enough +1's yet to show characteristics." But I'm not sure how many +1's is enough. Google's official help page for the Audience section offers little insight: "The Audience page displays information about people who have +1'd your pages, including the total number of unique users, their location, and their age and gender. All information is anonymized; Google doesn't share personal information about people who have +1'd your pages. To protect privacy, Google won't display age, gender, or location data unless a certain minimum number of people have +1'd your content." But what is that "certain minimum number"? I've tried Googling this, but all I could find to date was this page, which doesn't answer the question. So how many +1's does a site need before GWT will show me audience demographic characteristics?

    Read the article

  • Styles of games that work at low-resolution

    - by Brendan Long
    I'm taking a class on compilers, and the goal is to write a compiler for Meggy Jr devices (Arduino). The goal is just to make a simple compiler with loops and variables and stuff. Obviously, that's lame, so the "real goal" is to make an impressive game on the device. The problem is that it only has 64 pixels to work with (technically 72, but the top 8 are single-color and not part of the main display, so they're really only useful for displaying things like money). My problem is thinking of something to do on a device that small. It doesn't really matter if it's original, but it can't be something that's already available. My first idea was "snake", but that comes with the SDK. Same with a side-scrolling shooter. Remaining ideas include a tower defense game (hard to write, hard to control), an RPG (same), and Tetris (lame). The problem is that all of the games I like require a high-resolution screen because they have a lot of text. Even a really simple game like NetHack would be hard because each creature would be a single color. tl;dr: What styles of games require (a) no text and (b) few enough objects that representing each one with a single color is acceptable? EDIT: To clarify, the display is 8x8, for a total of 64 pixels, not 64x64.

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms: The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance.

    Remedy: There could be many contributing factors behind a B2B server's degradation of performance. However, one of the first things to tune away from the out-of-the-box default configuration is the number of internal engine threads allocated within the B2B server. The default configuration is usually not suitable for high-volume messaging loads, so it is necessary to increase the counts for 3 types of threads by specifying the appropriate B2B server properties via the EM console, namely:

    - Inbound: b2b.inboundThreadCount
    - Outbound: b2b.outboundThreadCount
    - Default: b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory: the inbound threads process the inbound messages coming into the B2B server from an external endpoint, and the outbound threads process the messages sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. If the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As in any tuning exercise, the optimal settings are usually reached via an iterative process; the best working combination of thread counts is directly related to the system infrastructure, traffic load and several other environmental factors.
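
    As an illustration only - the right values depend entirely on hardware and load, and these numbers are assumptions rather than recommendations - raising the three properties via the EM console might look like this:

        b2b.inboundThreadCount  = 8
        b2b.outboundThreadCount = 8
        b2b.defaultThreadCount  = 4

    The iterative approach mentioned above applies: raise the counts stepwise while watching the depth of B2B_EVENT_QUEUE.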

    Read the article

  • Cannot install openjdk on Hardy Heron

    - by infaustus
    I know that Hardy Heron is very old, but don't ask why Hardy... I've tried:

        root@vz10931:/etc/apt# apt-get install openjdk-6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          openjdk-6-jre: Depends: libasound2 (> 1.0.14) but it is not going to be installed
                         Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                         Depends: libxtst6 but it is not going to be installed
                         Depends: openjdk-6-jre-headless (>= 6b18-1.8.3-0ubuntu1~8.04.2) but it is not going to be installed
          vim: Depends: vim-common (= 1:7.1-138+1ubuntu3.1) but 2:7.3.154+hg~74503f6ee649-2ubuntu3 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    My sources.list:

        deb http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

    And:

        root@vz10931:/etc/apt# ls -l sources.list.d/
        total 0

    Please help. The last time I tried apt-get install -f, I had to reinstall the system because everything crashed. Edit: I checked which Java packages are installed:

        root@vz10931:/var/www/mailer# dpkg --list | grep java
        iU  sun-java6-bin  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar
        iU  sun-java6-jdk  6.24-1build0.8.04.1  Sun Java(TM) Development Kit (JDK) 6
        iU  sun-java6-jre  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar

    but when I try to run a Java file (java -jar program.jar), an error appears:

        -bash: java: command not found

    Read the article

  • Data Aggregation of CSV files in Java

    - by royB
    I have k CSV files (5, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, mainly in terms of speed. I don't think, by the way, that we will have memory issues. I would also like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example, file 1:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

    file 2:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

    result:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Approaches we have considered so far: hashing (using a concurrent hash table), merge-sorting the files, and a DB (MySQL, Hadoop or Redis). The solution needs to be able to handle huge amounts of data (each file has more than two million rows). A better example - file 1:

        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

    file 2:

        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

    merged file:

        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - 14 columns in total. We would like the solution to be the fastest possible in terms of data processing.
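
    Note that an exact hash map keyed on the full field tuple sidesteps the collision concern entirely: colliding buckets are resolved by comparing the actual key values, so no 64-bit digest is needed. A sketch in Python for brevity (the same pattern in Java would be a HashMap keyed on the joined key fields), with hypothetical file names:

        import csv
        from collections import defaultdict

        KEY_FIELDS = ["country", "city"]    # composite key columns
        VALUE_FIELDS = ["peopleNum"]        # numeric columns to sum

        totals = defaultdict(lambda: [0] * len(VALUE_FIELDS))
        for path in ["file1.csv", "file2.csv"]:   # hypothetical file names
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    key = tuple(row[k] for k in KEY_FIELDS)
                    for i, v in enumerate(VALUE_FIELDS):
                        totals[key][i] += int(row[v])

        with open("merged.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(KEY_FIELDS + VALUE_FIELDS)
            for key, values in totals.items():
                writer.writerow(list(key) + values)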

    Read the article

  • GParted in UBUNTU shows entire disk as UNALLOCATED SPACE

    - by msPeachy
    Good day to everyone. I hope someone can help me with my problem. I have a dual-boot Windows and Ubuntu system. I recently encountered an "hd0 out of disk" error and wasn't able to boot Ubuntu. So I booted into Windows; after booting and rebooting Windows 2 or 3 times, I tried booting Ubuntu, but I still get the "hd0 out of disk" error. I decided to run Ubuntu from a live USB to try to fix my Ubuntu partition using GParted, but when I run GParted, it shows my entire disk as UNALLOCATED SPACE! The strange thing is that Nautilus still shows and mounts my partitions. Also, every time I boot into Windows, my partitions exist and I am able to read and write to them. I have no idea what is wrong. Please help! I can't stand using Windows, since most of the tools I use are in Ubuntu. I don't mind reinstalling Ubuntu; in fact, I already tried reinstalling from the live USB, but I wasn't able to, since GParted and the Ubuntu installer do not recognize my partitions and show the entire disk as unallocated space. I am currently running Ubuntu from the live USB. Here's the output of sudo fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb30ab30a

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048  104869887   52433920  83  Linux
        /dev/sda2       104869888  105074687     102400   7  HPFS/NTFS/exFAT
        /dev/sda3       105074688  156149759   25537536   7  HPFS/NTFS/exFAT
        /dev/sda4       156151800  625153409  234500805   f  W95 Ext'd (LBA)
        /dev/sda5       156151808  169156591    6502392  82  Linux swap / Solaris
        /dev/sda6       169158656  294991871   62916608   7  HPFS/NTFS/exFAT
        /dev/sda7       294993920  471037944   88022012+  7  HPFS/NTFS/exFAT
        /dev/sda8       471041928  625121152   77039612+  7  HPFS/NTFS/exFAT

    When I run sudo parted -l, I get this error message:

        ubuntu@ubuntu:~$ sudo parted -l
        Error: Can't have a partition outside the disk!

    Read the article

  • Global vs. Local Monthly Searches in the AdWords keyword tool

    - by Gregory
    I'm trying to learn how to use the keyword tool in AdWords. Here's what I entered: Country: Russia; Language: Russian; Desktop and laptop devices. The keyword was туры в Израиль ("tours to Israel" in Russian Cyrillic letters), as a broad match type. Now... the results that I got were:

        Global monthly: 60,500
        Local monthly: 40,500

    If I got it right, "global monthly" means in this context: worldwide average monthly searches for this search term, in ANY language, on any Google search site (google.ru, google.com.ua, google.com, google.fr etc.). That's all nice, BUT... then I made a query for tours to Israel in English in the US, and I got:

        Global monthly: 60,500
        Local monthly: 27,100

    That doesn't make any sense to me, though! How come the total (global) figure is actually smaller than the combined sum of just TWO countries? (27,100 + 40,500 = 67,600 > 60,500.) By "any language", do they mean a translation of the term into ANY possible language? Or maybe by "language" Google means the language of the searcher's operating system? Or of their browser?

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask: there are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

    1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid). The size of the square's side is close to the radius within which I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points in total.
    2. For every visibility point on the grid, check whether that point is in the player's line of sight.
    3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

    This algorithm works - not perfectly, but it only requires some tuning. However, it is very slow: it requires 36 ray casts minimum, and most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D, but I'm not looking for an engine-specific solution.
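
    For reference, here is a compact sketch of the algorithm as described, in plain Python with stand-ins for the engine calls (a Unity version would use Physics raycasts in their place):

        GRID_SIZE = 6    # 6x6 visibility points, one per tile around the player
        TILE = 1.0       # spacing between points

        def visibility_points(player_pos):
            """Grid of candidate points centred on the player (step 1)."""
            px, py = player_pos
            half = GRID_SIZE / 2.0
            return [(px + (i - half) * TILE, py + (j - half) * TILE)
                    for i in range(GRID_SIZE) for j in range(GRID_SIZE)]

        def in_line_of_sight(player_pos, point):
            """Stub: would raycast from the player to the point (step 2)."""
            return True

        def objects_hit(camera_pos, point):
            """Stub: would raycast from the camera and return every wall crossed."""
            return []

        def walls_to_fade(camera_pos, player_pos):
            """Steps 1-3: grid, LOS filter, camera rays."""
            transparent = set()
            for point in visibility_points(player_pos):
                if in_line_of_sight(player_pos, point):
                    transparent.update(objects_hit(camera_pos, point))
            return transparent  # mark these objects transparent this frame

        print(len(walls_to_fade((0.0, 10.0), (0.0, 0.0))))  # 0 with the stubs

    Written this way, the cost structure is explicit: one LOS ray per grid point plus one camera ray per visible point, which matches the 36-70 raycasts per update described above.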

    Read the article

  • Ubuntu 12.04 LTS 32-bit does not detect 4GB of RAM

    - by David
    I have recently installed 4GB of RAM in an existing Ubuntu 12.04 32-bit system. It's not being recognised; only 3.2GB is showing. See:

        administrator@Root2:~$ free
                     total       used       free     shared    buffers     cached
        Mem:       3355256    1251112    2104144          0      48664     391972
        -/+ buffers/cache:     810476    2544780

    The system is PAE-capable. See:

        administrator@Root2:~$ grep --color=always -i PAE /proc/cpuinfo
        flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts
        flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts

    The system is fully patched, and I tried to run a manual PAE upgrade. See:

        administrator@Root2:~$ sudo apt-get install linux-generic-pae linux-headers-generic-pae
        [sudo] password for administrator:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        linux-generic-pae is already the newest version.
        linux-headers-generic-pae is already the newest version.
        The following packages were automatically installed and are no longer required:
          language-pack-zh-hans language-pack-kde-en language-pack-kde-zh-hans language-pack-kde-en-base kde-l10n-engb kde-l10n-zhcn language-pack-zh-hans-base firefox-locale-zh-hans language-pack-kde-zh-hans-base
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    I am not sure what else to try to get the full physical memory recognised, other than installing 64-bit. Any thoughts? Thanks! Output of uname -r:

        administrator@Root2:~$ uname -r
        3.2.0-24-generic-pae

    Read the article

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is that the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record. Appointment rules:

    - Appointments should be set from 8 AM to 8 PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments.
    - Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day.
    - There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480.

    Now the tricky requirement: appointments are restricted to 8 AM to 8 PM in the consumer's local time zone. This means that the earliest time allowed for each appointment differs by time zone:

    - Eastern: 8A
    - Central: 9A
    - Mountain: 10A
    - Pacific: 11A
    - Alaska: 12P
    - Hawaii or undefined: 2P
    - Arizona: 10A or 11A based on current Daylight Savings Time

    Assuming the data set can be several thousand records, and each record contains a time zone value, is there an algorithm I could use to determine a date and time for every record that matches the rules above?
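
    One way to turn the rules into code is to generate every 5-minute Eastern-time slot with a capacity of 5 (one per user) and give each time zone a minimum Eastern hour-of-day; a slot is valid for a record when its hour is at or past that minimum. A sketch with simplified DST handling (Arizona's minimum is assumed to be 11 here; a production version would consult a real tz database):

        from datetime import date, datetime, time, timedelta

        # Bookable Eastern-time windows: 8-12, 2-4, 6-8 (12-2 and 4-6 are breaks).
        WINDOWS = [(time(8), time(12)), (time(14), time(16)), (time(18), time(20))]
        USERS = 5  # 5 simultaneous users => capacity of 5 per 5-minute slot

        # Minimum Eastern hour-of-day so the consumer is reached at/after 8 AM local.
        MIN_HOUR = {"eastern": 8, "central": 9, "mountain": 10, "pacific": 11,
                    "alaska": 12, "hawaii": 14, "unknown": 14,
                    "arizona": 11}  # 10 or 11 depending on DST; 11 assumed

        def slots_for(day):
            """Every 5-minute Eastern slot for one day, in chronological order."""
            for start, end in WINDOWS:
                t = datetime.combine(day, start)
                while t < datetime.combine(day, end):
                    yield t
                    t += timedelta(minutes=5)

        def schedule(timezones, first_day, horizon_days=30):
            """First-fit assignment of one slot per record (time zone string)."""
            capacity = {s: USERS
                        for d in range(horizon_days)
                        for s in slots_for(first_day + timedelta(days=d))}
            order = sorted(capacity)
            out = []
            for tz in timezones:
                slot = next(s for s in order
                            if capacity[s] and s.hour >= MIN_HOUR.get(tz, 14))
                capacity[slot] -= 1
                out.append(slot)
            return out

        print(schedule(["hawaii", "eastern"], date(2012, 6, 11)))
        # first record lands at 2 PM ET, second at 8 AM ET

    Note that the Alaska rule falls out automatically: its minimum is 12P, but the first slot at or after noon Eastern is 2 PM because of the 12-2 break.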

    Read the article

  • How can I find the shortest path between two subgraphs of a larger graph?

    - by Pops
    I'm working with a weighted, undirected multigraph (loops not permitted; most node connections have multiplicity 1; a few node connections have multiplicity 2). I need to find the shortest path between two subgraphs of this graph that do not overlap with each other. There are no other restrictions on which nodes should be used as start/end points. Edges can be selectively removed from the graph at certain times (as explained in my previous question) so it's possible that for two given subgraphs, there might not be any way to connect them. I'm pretty sure I've heard of an algorithm for this before, but I can't remember what it's called, and my Google searches for strings like "shortest path between subgraphs" haven't helped. Can someone suggest a more efficient way to do this than comparing shortest paths between all nodes in one subgraph with all nodes in the other subgraph? Or at least tell me the name of the algorithm so I can look it up myself? For example, if I have the graph below, the nodes circled in red might be one subgraph and the nodes circled in blue might be another. The edges would all have positive integer weights, although they're not shown in the image. I'd want to find whatever path has the shortest total cost as long as it starts at a red node and ends at a blue node. I believe this means the specific node positions and edge weights cannot be ignored. (This is just an example graph I grabbed off Wikimedia and drew on, not my actual problem.)
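
    One standard approach is a multi-source Dijkstra: seed the priority queue with every node of the first subgraph at distance 0 (equivalent to attaching a virtual super-source with zero-weight edges) and stop at the first node of the second subgraph that gets popped. That's a single search instead of one per node pair. A sketch, assuming an adjacency-list representation (parallel edges of a multigraph simply appear twice in a list):

        import heapq

        def shortest_path_between_subgraphs(graph, sources, targets):
            """graph: {node: [(neighbor, weight), ...]}; returns (cost, path)
            for the cheapest path from any node in sources to any in targets."""
            targets = set(targets)
            dist = {s: 0 for s in sources}
            prev = {s: None for s in sources}
            heap = [(0, s) for s in sources]
            heapq.heapify(heap)
            while heap:
                d, node = heapq.heappop(heap)
                if d > dist.get(node, float("inf")):
                    continue                      # stale heap entry
                if node in targets:               # first target popped is optimal
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return d, path[::-1]
                for nbr, w in graph.get(node, []):
                    nd = d + w
                    if nd < dist.get(nbr, float("inf")):
                        dist[nbr], prev[nbr] = nd, node
                        heapq.heappush(heap, (nd, nbr))
            return None                           # the subgraphs are disconnected

        g = {"r1": [("x", 2)], "x": [("r1", 2), ("b1", 3)], "b1": [("x", 3)]}
        print(shortest_path_between_subgraphs(g, ["r1"], ["b1"]))
        # (5, ['r1', 'x', 'b1'])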

    Read the article

  • mdadm starts resync on every boot

    - by Anteru
    For a few days now (and I'm positive it started shortly before I updated my server from 13.04 to 13.10) my mdadm array has been resyncing on every boot. In the syslog, I get the following output:

        [ 0.809256] md: linear personality registered for level -1
        [ 0.811412] md: multipath personality registered for level -4
        [ 0.813153] md: raid0 personality registered for level 0
        [ 0.815201] md: raid1 personality registered for level 1
        [ 1.101517] md: raid6 personality registered for level 6
        [ 1.101520] md: raid5 personality registered for level 5
        [ 1.101522] md: raid4 personality registered for level 4
        [ 1.106825] md: raid10 personality registered for level 10
        [ 1.935882] md: bind<sdc1>
        [ 1.943367] md: bind<sdb1>
        [ 1.945199] md/raid1:md0: not clean -- starting background reconstruction
        [ 1.945204] md/raid1:md0: active with 2 out of 2 mirrors
        [ 1.945225] md0: detected capacity change from 0 to 2000396680192
        [ 1.945351] md: resync of RAID array md0
        [ 1.945357] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [ 1.945359] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [ 1.945362] md: using 128k window, over a total of 1953512383k.
        [ 2.220468] md0: unknown partition table

    I'm not sure what's up with that detected capacity change; looking at some old logs, it has appeared earlier as well without a resync right afterwards. In fact, I let the resync run to completion yesterday and rebooted, and it didn't resync then, but today it is resyncing again. For instance, yesterday I got:

        [ 1.872123] md: bind<sdc1>
        [ 1.950946] md: bind<sdb1>
        [ 1.952782] md/raid1:md0: active with 2 out of 2 mirrors
        [ 1.952807] md0: detected capacity change from 0 to 2000396680192
        [ 1.954598] md0: unknown partition table

    So it seems the problem is that the RAID array does not get marked as clean on every shutdown? How can I troubleshoot this? The disks themselves are both fine; SMART reports no errors, everything is OK.

    Read the article

  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since the last backup?" Here is a simple script that will tell you how much (in percent) has changed in the database since the last backup.

        -- Prepare staging table for all DBCC outputs
        DECLARE @Sample TABLE
                (
                    Col1 VARCHAR(MAX) NOT NULL,
                    Col2 VARCHAR(MAX) NOT NULL,
                    Col3 VARCHAR(MAX) NOT NULL,
                    Col4 VARCHAR(MAX) NOT NULL,
                    Col5 VARCHAR(MAX)
                )

        -- Some intermediate variables for controlling loop
        DECLARE @FileNum BIGINT = 1,
                @PageNum BIGINT = 6,
                @SQL VARCHAR(100),
                @Error INT,
                @DatabaseName SYSNAME = 'Yoda'

        -- Loop all files to the very end
        WHILE 1 = 1
            BEGIN
                BEGIN TRY
                    -- Build the SQL string to execute
                    SET @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '
                               + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'

                    -- Insert the DBCC output in the staging table
                    INSERT  @Sample
                            (
                                Col1,
                                Col2,
                                Col3,
                                Col4
                            )
                    EXEC    (@SQL)

                    -- DCM pages exist at an interval
                    SET @PageNum += 511232
                END TRY
                BEGIN CATCH
                    -- If error and first DCM page does not exist, all files are read
                    IF @PageNum = 6
                        BREAK
                    ELSE
                        -- If no more DCM, increase filenum and start over
                        SELECT  @FileNum += 1,
                                @PageNum = 6
                END CATCH
            END

        -- Delete all records not related to diff information
        DELETE
        FROM    @Sample
        WHERE   Col1 NOT LIKE 'DIFF%'

        -- Split the range
        UPDATE  @Sample
        SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),
                Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)

        -- Remove last parenthesis
        UPDATE  @Sample
        SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),
                Col5 = RTRIM(REPLACE(Col5, ')', ''))

        -- Remove initial information about filenum
        UPDATE  @Sample
        SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),
                Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)

        -- Prepare data outtake
        ;WITH cteSource(Changed, [PageCount])
        AS (
            SELECT      Changed,
                        SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]
            FROM        (
                            SELECT  CAST(Col3 AS INT) AS FromPage,
                                    CAST(NULLIF(Col5, '') AS INT) AS ToPage,
                                    LTRIM(Col4) AS Changed
                            FROM    @Sample
                        ) AS d
            GROUP BY    Changed
            WITH ROLLUP
        )
        -- Present the final result
        SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,
                [PageCount],
                100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage
        FROM    cteSource

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure that will be the final choice. It has 512kB flash and 128kB RAM. For the INI file, I am talking of max 8 sections, with a total of max 256 entries, each max 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on PC), but I suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need more than one line in RAM at a time...
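
    For obscurity-level protection, a small stream cipher such as RC4 or XTEA is easy to implement identically in Delphi and C, and its state fits comfortably in a UC3's RAM (RC4 needs a 256-byte table plus two indices, which can be freed or reused as soon as decryption is done). A sketch of RC4 in Python - the same dozen lines port directly to both languages - noting that RC4 is not secure by modern standards, which matches the stated requirement:

        def rc4(key: bytes, data: bytes) -> bytes:
            """RC4 stream cipher; encryption and decryption are the same operation."""
            # Key-scheduling algorithm (KSA): permute a 256-byte state table.
            s = list(range(256))
            j = 0
            for i in range(256):
                j = (j + s[i] + key[i % len(key)]) % 256
                s[i], s[j] = s[j], s[i]
            # Pseudo-random generation algorithm (PRGA): XOR keystream with data.
            out = bytearray()
            i = j = 0
            for byte in data:
                i = (i + 1) % 256
                j = (j + s[i]) % 256
                s[i], s[j] = s[j], s[i]
                out.append(byte ^ s[(s[i] + s[j]) % 256])
            return bytes(out)

        secret = rc4(b"ini-key", b"[section]\nname=value\n")
        print(rc4(b"ini-key", secret))  # round-trips back to the plaintext

    Because it is a stream cipher, the embedded side can decrypt the file one line at a time, which fits the one-line-in-RAM approach mentioned above.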

    Read the article

  • lower-case 'c' key not working in bash

    - by gavin
    This is a bit of a strange one. I'm running Ubuntu 12.04. It's been working well, but today I ran into a hell of a strange phenomenon: I can no longer type a lower-case 'c' in bash. At first I thought it was a misconfiguration of the GNOME terminal, but I tried both a stock xterm and the console directly (Ctrl+Alt+F1), and the issue was the same. I can type an upper-case 'C' without any difficulty, and I can type a lower-case 'c' in any other terminal-based program (vim, bash, less, etc.). The lower-case 'c' also works if I jump into plain old sh. I looked at all the configuration files I know of and haven't found anything incriminating. I suspect it's not going to be that simple anyway, because if I run bash with the '--norc' option from within sh, the problem remains. I don't know what else to check. In fact, if I wanted to cause this problem on a given machine, I have no idea how it could be done. A total mystery.

    Read the article

  • Cloud computing - database loading question

    - by workwise
    The following is the situation. I want to know whether what I want is possible in cloud computing and whether it is the best approach for me:

    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images - and growing - so say the total size of the images is a few tens of gigabytes. I will keep the image data in sync on the backup server as well.
    4) There can be short periods where traffic goes to 100x the average.
    5) I will be using memcache heavily - most database data and even frequently used disk files/images will be in RAM.

    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. What I want is that when I anticipate high traffic, I can spin up a large instance in the cloud and move the traffic there. The main point is: I do not want to spend time "loading" the database on the large instance, as that can typically take minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks, JP

    Read the article
