Search Results

Search found 19367 results on 775 pages for 'array unique'.

Page 327 of 775

  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C#, I am attempting to display a 3d version of a real world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, then my app will have to generate a 3d plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3d snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use?

    NOTE: I understand that this is a very difficult endeavour (rendering real world locations dynamically in Unity3D), so I expect to perform many steps to achieve it. I just don't know all the technologies out there and which would best fit my needs.

    Suggested methodology 1:
    - Prompt the user to specify GPS coords.
    - Use the Google Earth API over HTTP to programmatically obtain a .kmz file describing that location (I'm not sure Google Earth provides that capability; does it?).
    - Unzip the .kmz so I have the .dae file.
    - Convert that file to a .3ds file using some third-party converter (does such a converter exist?).
    - Import the .3ds into Unity3D at runtime as a plane (is this possible?).

    Suggested methodology 2:
    - Prompt the user to specify GPS coords.
    - Use the Google Earth API over HTTP to programmatically obtain a .kmz file describing that location, and unzip it so I have the .dae file.
    - Parse the .dae file using my own C# parser (do you think it's possible to write a .dae parser that can produce an array of Vector3 describing the height map of that location?).
    - Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane; see the sketch below).

    Can you think of any other ways I could render a real world 3d environment in Unity3D?
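
    If the height samples end up as a grid, building a mesh at runtime is possible. A minimal sketch (my own illustration, not from the thread; the grid layout and heights array are hypothetical) that turns sampled heights into a Unity mesh:

        using UnityEngine;

        // Builds a grid mesh from sampled heights. Attach to a GameObject
        // that has MeshFilter and MeshRenderer components.
        public class HeightmapMesh : MonoBehaviour
        {
            public void Build(float[,] heights, float cellSize)
            {
                int w = heights.GetLength(0), h = heights.GetLength(1);
                var verts = new Vector3[w * h];
                for (int z = 0; z < h; z++)
                    for (int x = 0; x < w; x++)
                        verts[z * w + x] = new Vector3(x * cellSize, heights[x, z], z * cellSize);

                // Two triangles per grid cell.
                var tris = new int[(w - 1) * (h - 1) * 6];
                int t = 0;
                for (int z = 0; z < h - 1; z++)
                    for (int x = 0; x < w - 1; x++)
                    {
                        int i = z * w + x;
                        tris[t++] = i;     tris[t++] = i + w; tris[t++] = i + 1;
                        tris[t++] = i + 1; tris[t++] = i + w; tris[t++] = i + w + 1;
                    }

                var mesh = new Mesh { vertices = verts, triangles = tris };
                mesh.RecalculateNormals();
                GetComponent<MeshFilter>().mesh = mesh;
            }
        }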

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance and are ideal for running applications and databases that traditionally run on separate, dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical: you are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers: they should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including controlling when a command can be executed, or allowing the DBA to manage the health of the database and its objects without being able to see the data.

    Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data, so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms, protecting against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels, with a constantly evolving attitude. The main ingredient of Oracle's success is the 105,000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference.

    What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle's complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world class training and structured career development programmes, allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations.

    What positions are OD hiring? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills.

    What is the Business Development Group (BDG)? The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue-generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever-changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues, who will progress your qualified pipeline and opportunities.

    Work for us:
    - Work for the only multi-pillar SaaS vendor in the market
    - Be part of a FUN, fast paced and truly International sales team
    - Develop your solution sales EXPERTISE
    - Drive your CAREER development within a structured and supportive environment

    The Profile:
    - You have a passion for selling cutting-edge technology
    - You thrive in a fast paced and dynamic work environment where being the best is paramount
    - Your priority is always the customer
    - You live for a challenge and you love to win

    Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here.

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I wake up with a clear vision, but sadly my laptop card doesn't do displacement mapping nor glVertexAttribDivisor, so I can't test it out; I'm left sharing it here:

    With geomipmapping, the grid at any factor is transposable: if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom?

    So I have some questions:
    - Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Are there papers, demos, anything I can look at?
    - Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
    - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
    - Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
    - Is mipmapping the displacement map going to work, on current or future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
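
    For what it's worth, the core of the idea fits in a few lines of vertex shader. This is a minimal sketch (the uniform names are my own invention), assuming GLSL 3.30, where textureLod is available in the vertex stage; whether the hardware actually filters vertex-stage fetches trilinearly is one of the open questions above:

        #version 330
        layout(location = 0) in vec2 a_gridPos;   // reusable grid vertex, tile-local units
        uniform sampler2D u_heightmap;            // mipmapped displacement map
        uniform vec2  u_tileOffset;               // world-space offset of this tile
        uniform vec2  u_uvOffset;                 // offset into the heightmap for this tile
        uniform float u_uvScale;
        uniform float u_heightScale;
        uniform float u_lod;                      // coarser mip level for distant tiles
        uniform mat4  u_mvp;

        void main() {
            vec2 uv = u_uvOffset + a_gridPos * u_uvScale;
            float h = textureLod(u_heightmap, uv, u_lod).r * u_heightScale;
            vec3 world = vec3(a_gridPos.x + u_tileOffset.x, h, a_gridPos.y + u_tileOffset.y);
            gl_Position = u_mvp * vec4(world, 1.0);
        }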

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source.

    Say you were the tester; how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code is working, and if you're not familiar with the operator precedence it relies on then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable.

    I would also like to mention that this is not specific to this example, but general to code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include:
    - Support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded)
    - Per-application integration with Berkeley DB on Android
    - Server-side support for the Apache TomEE platform

    Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, AKA machine to machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • How to rotate different data across days of the week in PHP [migrated]

    - by shihon
    I am working on a project in which I have to distribute different ads per day. The ads, in the form of an array, are:

        $ad = array(
            'attribute1_value' => "12",
            'attribute2_value' => "xyz",
            'attribute3_value' => 'http://example.com',
            'attribute4_value' => 'data'
        );

    The logic I am using with a switch case:

        $day = date('w', time());
        switch ($day) {
            case '0':
                if ($day == '0') {
                    $count = 0;
                    echo $ad;
                    $count++;
                } else {
                    $count = 7;
                    echo $ad;
                }
                break;
            case '1':
                if ($day == '1') {
                    $count = 1;
                    echo $ad;
                    $count++;
                } else {
                    $count = 8;
                    echo $ad;
                }
                break;

    The problem is: if I have ~15 ads, then I want to distribute one ad per day. date('w') outputs the present day, but after day 7, i.e. Saturday, on Sunday ad number 8 should be shown. I have to implement this scenario using the date function. Also, I have to send ads only to those users who have not experienced the ad before. I am not an expert in PHP; I'm a beginner working in PHP/MySQL. Kindly help me to improve this concept.
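
    As a sketch of one alternative (not from the original post; the $ads pool and its size are assumptions): keying the index off the day of the year avoids the weekly wrap-around entirely, because date('z') keeps counting past day 7 and the modulo wraps at the number of ads instead:

        <?php
        // Hypothetical pool of ads; each element would be an array like $ad above.
        $ads = array(/* ad 0, ad 1, ... ad 14 */);

        // date('z') is the day of the year (0..365). Consecutive days walk
        // through the pool and wrap around after the last ad, so with 15 ads
        // day 15 shows ad 0 again rather than restarting every week.
        $index = (int) date('z') % count($ads);
        $todays_ad = $ads[$index];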

    Read the article

  • Silverstripe: How can I disable comments?

    - by SamIAm
    My client's site is built in Silverstripe. There is a news page, and it allows people to leave comments. Unfortunately we've got loads of spam emails. I'm new to this; is there any way we can disable the comment field by default? How do I do it? Alternatively, is there an easy way for me to install spam protection?

    Update - Because this is someone else's code, I just realised that they have some sort of spam protection already, so we are trying to disable comments now. I have managed to set no comments as the default by changing this in the file BlogEntry.php:

        static $defaults = array(
            "ProvideComments" => true,
            'ShowInMenus' => false
        );

    to:

        static $defaults = array(
            "ProvideComments" => false, // changed
            'ShowInMenus' => false
        );

    Am I on the right track to disable comments by default? Also, how can I stop the news page showing the "xxx comments" link? E.g.:

        Test
        Posted by Admin on 21 June 2011 | 3 Comments
        Tags: P
        This is a test....
        3 comments | Read the full post

    Read the article

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design. Only 3 clients are connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:

    A) Merged capacity: I'd like all other systems to access the data as a single mapped network drive that has an initial capacity of 10 TB. So perhaps 5x 2TB drives (plus mirror drives for redundancy).

    B) Easy way to increase capacity: Thinking long term, I'd like to 'easily' add more drives to the array for a potential two or three fold increase in capacity. So theoretically it could get up to a 30 TB array consisting of maybe 15x 2TB drives of capacity (plus mirror drives for redundancy).

    C) Maximum fault tolerance: I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2TB of capacity, I suppose I would need another 5x 2TB drives to be mirrors, so 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and to be able to add them to the array anytime with ease).

    D) Easy way to monitor drive health: I'd like an intuitive interface for managing the RAID and monitoring drive health.

    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this system should be running? Does it really matter? I'm no network admin, just a long time Windoze user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server. Or maybe something such as Openfiler. The budget should hit the sweet spot for value and performance (hence my preference for 2TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2TB WD Caviar Blacks; about $1900 for the storage + $x for the remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD

    Your advice is appreciated, thanks!

    Read the article

  • Battery backed write cache behavior upon disk change

    - by Halfgaar
    We use 3ware 9650SE SATA-II PCIe RAID controllers with battery-backed write cache. Our spare hardware has the same controller. I was wondering: are these controllers smart enough not to sync the cache when the disks have been changed? For example, if I deploy one of those spare machines by putting in the disks of another machine, and that spare machine still has pending writes, will it be smart enough not to perform those writes on the replaced array?

    Edit: my scenario is not really made clear, so let me give an example:
    1. server1 goes down because of a power supply failure.
    2. I put the disks in server2 and start it.
    3. I repair server1.
    4. I put the disks from server2 back in server1 (it's not relevant right now that in reality I would probably keep server2 running).

    If server1 doesn't have safeguards, it will write to the array, thinking it's simply powering up again, corrupting it.

    Read the article

  • Why can't I add a hot spare in FreeBSD? Can anybody help me fix it?

    - by hamlet
    Why can't I add a hot spare? Can anybody help me fix it?

        # mfiutil add e1:s1 mfid0
        mfiutil: Drive 1 is not available

    My mfi status:

        # mfiutil show config
        mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
        array 0 of 2 drives:
            drive 0 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWHSB4C> SAS enclosure 1, slot 0
            drive 1 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWJ3AEC> SAS enclosure 1, slot 1
        volume mfid0 (136G) RAID-1 64K OPTIMAL spans:
            array 0

        # mfiutil show events
        1468 (boot + 25s/BATTERY/WARN) - Battery removed
        1475 (boot + 52s/DRIVE/WARN) - PD 00(e1/s0) is not a certified drive
        1478 (boot + 52s/DRIVE/WARN) - PD 01(e1/s1) is not a certified drive
        1480 (boot + 64s/BATTERY/WARN) - BBU disabled; changing WB virtual disks to WT

        # mfiutil show volumes
        mfi0 Volumes:
          Id    Size    Level  Stripe  State    Cache     Name
         mfid0 ( 136G)  RAID-1    64K  OPTIMAL  Disabled

    Read the article

  • launchctl - use rvm instead of system Ruby in executed scripts?

    - by Stefan Kendall
    I have a launchctl job I define as such:

        <key>ProgramArguments</key>
        <array>
            <string>/bin/sh</string>
            <string>-c</string>
            <string>~/projects/script.sh</string>
        </array>

    When I run script.sh manually, the script works fine, as it uses the currently configured rvm version of Ruby. When I run this through launchctl, the system version of Ruby is used, which breaks the script. How can I get this script to run with the right version of Ruby available?
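
    One possible workaround (a sketch, untested here, and it assumes rvm is initialised from your ~/.bash_profile) is to run the job through a login shell, so the rvm-selected Ruby ends up on the PATH before the script starts:

        <key>ProgramArguments</key>
        <array>
            <!-- "-l" makes bash a login shell, which reads ~/.bash_profile
                 and therefore sources rvm before running the script. -->
            <string>/bin/bash</string>
            <string>-l</string>
            <string>-c</string>
            <string>~/projects/script.sh</string>
        </array>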

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (to usually 64 pixels). I'm calculating size using the formula

        gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach:
    - I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach for organising per-particle data?
    - Should I use buffers or vertex arrays, or is there no difference, given that each time I have to update all particles' data anyway?
    - About the particle simulation: where should it be done, on the CPU or on the vertex processor? Something tells me that a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :).

    So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through particles should be realistic.)
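
    If instancing is available on the target devices (an assumption: GLES 3.0 has it, while plain GLES 2.0 lacks glVertexAttribDivisor unless an instancing extension is present), the redundancy described above mostly disappears: the four corners live in one tiny shared buffer, while position and colour are stored once per particle. A rough C sketch (cornerVbo and instanceVbo are hypothetical buffer handles):

        /* Per-particle data stored once per particle, not once per corner. */
        typedef struct {
            float center[3];        /* world-space particle centre  */
            float size;             /* world-space half-width       */
            unsigned char rgba[4];  /* shared colour for the quad   */
        } ParticleInstance;

        /* Attribute 0: quad corner, from a tiny shared 4-vertex VBO. */
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, cornerVbo);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

        /* Attributes 1 and 2: per-instance centre/size and colour. */
        glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE,
                              sizeof(ParticleInstance), (void *)0);
        glVertexAttribDivisor(1, 1);   /* advance once per particle */
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                              sizeof(ParticleInstance), (void *)(4 * sizeof(float)));
        glVertexAttribDivisor(2, 1);

        /* One draw call for all particles; the vertex shader offsets the
           corner along the camera's right/up vectors scaled by size. */
        glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, particleCount);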

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent about 20 hours on this nice error, and it seems like dozens of people over the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot up it says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server: I have to bring a monitor and keyboard over just to continue. After this the system boots and it's all fine. The md0 device works, /proc/mdstat is fine. When I do mount -a, it mounts this array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab, and did the mounting in /etc/rc.local; it works fine then. Any hints how to make it work properly?

    fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2

    Read the article

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language. Because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible to the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name.

    In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The custom name should ideally describe the name of the library it is part of. Here arises the confusion. Let's say I'm writing a binding for libcURL. In this case, it seems reasonable to call my extension library curl_myscript_binding, like this:

        MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10];

    But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib, but unfortunately libcURL does not exclusively use the curl_ prefix; the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they would clutter the myscript namespace. (I've done some research, and it seems that, for example, the Perl cURL binding follows this pattern. Not sure what I should think about that...)

    So how do you suggest I name my variables? Are there any general guidelines that should be followed?

    Read the article

  • Teaching OO to VBA developers [closed]

    - by Eugene
    I work with several developers who come from less object-oriented backgrounds (VB6, VBA) and are mostly self-taught. As part of moving away from those technologies, we recently started having weekly workshops to go over the features of C#.NET and OO practices and design principles. After a couple of weeks of basic introduction I noticed that they had a lot of problems implementing even basic code. For instance, it took probably 15 minutes to implement Stack.push() and a full hour to implement a simple Stack fully. These developers were trying to do things like passing the top index as a parameter to the method, not creating a private array, and using variables out of scope. But most of all, they were not going through the "design (dia/mono)log" (I need something to do X, so maybe I'll make an array, or put it here). I am a little confused, because they are smart people and are able to produce functional code in their traditional environments. I'm curious if anybody else has encountered a similar thing, and if there are any particular resources, exercises, books, or ideas that would be helpful in this circumstance.

    Read the article

  • Booting Windows7 kernel from an initrd/wim image file

    - by Ivo
    I'm wondering if it's possible to have the Win7 kernel and relevant drivers (especially storage drivers) boot from an initrd-like image file (maybe .wim?) and only later mount the Windows root partition and complete the load of the full OS. I'll try to explain why: I'm running an emulated environment with NO REAL BIOS, and I'm passing through a RAID storage controller. I want Windows to boot from this controller's array, but of course the BCD manager cannot access disks in the array until the kernel and its storage controller drivers are loaded. To be clear, I get the classic winload.exe missing error. I need a solution similar to what Linux does: load the kernel and its drivers, and only then mount the root partition and complete the boot. Any ideas or advice?

    Read the article

  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the performance counters Disk reads/sec and Disk writes/sec on four servers (physical boxes, no virtualization) that each have 4x 15k 146GB SAS drives in RAID10, set to check and record data every 1 second, and logged for 24 hours before stopping the reports. These are the results we got:

        Server1: maximum disk reads/sec: 4249.437,  maximum disk writes/sec: 4178.946
        Server2: maximum disk reads/sec: 2550.140,  maximum disk writes/sec: 5177.821
        Server3: maximum disk reads/sec: 1903.300,  maximum disk writes/sec: 5299.036
        Server4: maximum disk reads/sec: 8453.572,  maximum disk writes/sec: 11584.653

    The average disk reads and writes per second were generally low. E.g. for one particular server it was an average of 33 writes/sec, but when monitoring in real time it would often spike up to several hundred, and sometimes into the thousands. Could someone explain to me why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS?

    Additional details (RAID card): HP Smart Array P410i, total cache size of 1GB, write cache is disabled, array accelerator cache ratio is 25% read and 75% write.
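
    For reference, the theoretical ceiling the question alludes to works out as follows (a back-of-the-envelope sketch that assumes ~180 IOPS per 15k spindle and purely random, uncached I/O):

        4 drives x 180 IOPS       = ~720 random reads/sec  (RAID10 reads from all spindles)
        4 drives x 180 IOPS / 2   = ~360 random writes/sec (each write lands on both mirrors)

    Counters can legitimately exceed those figures when the I/O isn't random (a 15k drive services far more small sequential requests than random ones) or when requests are absorbed above the spindles, since perfmon counts requests handed to the disk subsystem, not seeks the platters actually performed.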

    Read the article

  • Storage servers architectural solution for backup. What is the best way? (pics inside)

    - by Kirzilla
    Hello,

    What is the best architecture for a storage server array? Needs:
    a) an easy way to add one more server to the array
    b) we don't have a single backup server
    c) we need to have one backup for each "web" part of each server

    Group #1 is a cross-server backup scheme; the main disadvantage is that we can't add just one more server, we have to add 2 servers at a time.

    Group #2 is like Group #1, but with three or more servers. It also has a disadvantage: to add one more server, we have to move an existing backup to it.

    Any suggestions? Thank you.

    Read the article

  • Index Check and Correct Character Display in a Console Hangman Game for Java

    - by Jen
    I have this problem wherein I cannot display the correct characters given by the user. Here's what I have:

        String words, in;
        String replaced_words;
        Scanner s = new Scanner(System.in);

        System.out.println("Enter a line of words basing on an event, verse, place or a name of a person.");
        words = s.nextLine();
        System.out.println("The words you just placed are now accepted.");

        // using char array method, we tried to place the words into a characters array.
        char[] c = words.toCharArray();

        // we need to replace the
        replaced_words = words.replace(' ', '_').replaceAll("[^\\-]", "-");
        for (int i = 0; i < replaced_words.length(); i++) {
            System.out.print(replaced_words.charAt(i) + " ");
        }

        System.out.println("Now, please input a character, guessing the words you just placed.");
        in = s.nextLine();

    In that code, I want that when the user types a character, any correct character the user inputs will be displayed, changing the hyphen to it (much like the hangman series of games). How can I achieve this?
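
    A minimal sketch of the reveal step, continuing from the variables above (the case-insensitive match is my own choice, not from the post; it relies on the mask having the same length as the original words, which String.replace preserves):

        // Take the first typed character as the guess.
        char guess = in.charAt(0);

        // Walk the original words; wherever the guess matches, copy the
        // real character into the masked string at the same position.
        StringBuilder mask = new StringBuilder(replaced_words);
        for (int i = 0; i < words.length(); i++) {
            if (Character.toLowerCase(words.charAt(i)) == Character.toLowerCase(guess)) {
                mask.setCharAt(i, words.charAt(i));
            }
        }
        replaced_words = mask.toString();
        System.out.println(replaced_words);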

    Read the article

  • Design Pattern for building a Budget

    - by Scott
    So I've looked at the Builder pattern, abstract interfaces, other design patterns, etc., and I think I'm overthinking the simplicity behind what I'm trying to do, so I'm asking you guys for some help with either recommending a design pattern I should use, or an architectural style I'm not familiar with that fits my task.

    I have one model that represents a Budget in my code. At a high level, it looks like this:

        public class Budget
        {
            public int Id { get; set; }
            public List<MonthlySummary> Months { get; set; }
            public float SavingsPriority { get; set; }
            public float DebtPriority { get; set; }
            public List<Savings> SavingsCollection { get; set; }
            public UserProjectionParameters UserProjectionParameters { get; set; }
            public List<Debt> DebtCollection { get; set; }
            public string Name { get; set; }
            public List<Expense> Expenses { get; set; }
            public List<Income> IncomeCollection { get; set; }
            public bool AutoSave { get; set; }
            public decimal AutoSaveAmount { get; set; }
            public FundType AutoSaveType { get; set; }
            public decimal TotalExcess { get; set; }
            public decimal AccountMinimum { get; set; }
        }

    Going into more detail about the properties here shouldn't be necessary, but if you have any questions about them I will fill more out for you.

    Now, I'm trying to create code that builds one of these things based on a set of BudgetBuildParameters that the user will create and supply. There are going to be multiple types of these parameters. For example, on the site's homepage there will be an example section where you can quickly see what your numbers look like, so those would be a much simpler set of SampleBudgetBuildParameters than, say, after a user registers and wants to create a fully filled out Budget using the much richer DebtBudgetBuildParameters. Now, a lot of these builds will use similar code for certain tasks, but one might want to check the status of a user's DebtCollection when formulating a monthly spending report, whereas a Budget that only focuses on savings might not.

    I'd like to reduce code duplication (obviously) as much as possible, but in my head, every way I can think of to do this would require using a base BudgetBuilderFactory to return the correct builder to the caller, and then creating say a SimpleBudgetBuilder that inherits from a BudgetBuilder, putting all duplicate code in the BudgetBuilder, and letting the SimpleBudgetBuilder handle its own cases. The problem is, a lot of the unique cases are unique to 2 of the 4 builders, so there will obviously be duplicate code somewhere in there if I did that. Can anyone think of a better way to either explain a solution to this, which may or may not be similar to mine, or a completely different pattern or way of thinking here? I really appreciate it.
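
    For what it's worth, one low-ceremony reading of this (a sketch, not a prescription; the subtype and step names are hypothetical) is a template-method style base builder: the factory still picks the builder, but shared steps live once in the base, and a step needed by only two of the four builders becomes a single protected virtual hook that just those two override:

        // Sketch only: BudgetBuildParameters subtypes and step names are invented.
        public abstract class BudgetBuilder
        {
            public Budget Build(BudgetBuildParameters parameters)
            {
                var budget = new Budget();
                AddIncome(budget, parameters);   // shared by all builders
                AddExpenses(budget, parameters); // shared by all builders
                AddDebts(budget, parameters);    // no-op unless overridden
                BuildMonths(budget, parameters); // always variant-specific
                return budget;
            }

            protected virtual void AddIncome(Budget b, BudgetBuildParameters p) { /* common */ }
            protected virtual void AddExpenses(Budget b, BudgetBuildParameters p) { /* common */ }
            protected virtual void AddDebts(Budget b, BudgetBuildParameters p) { }
            protected abstract void BuildMonths(Budget b, BudgetBuildParameters p);
        }

        public class SimpleBudgetBuilder : BudgetBuilder
        {
            // Homepage estimate: never touches DebtCollection.
            protected override void BuildMonths(Budget b, BudgetBuildParameters p) { /* ... */ }
        }

        public class DebtBudgetBuilder : BudgetBuilder
        {
            // Debt-aware build: consults b.DebtCollection for the monthly report.
            protected override void AddDebts(Budget b, BudgetBuildParameters p) { /* ... */ }
            protected override void BuildMonths(Budget b, BudgetBuildParameters p) { /* ... */ }
        }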

    Read the article

  • canvas tile grid, hover effects, single tilesheet, etc

    - by user121730
    I'm currently in the process of building both the client and server side of an HTML5, canvas, and WebSocket game. This is what I have thus far for the client: http://jsfiddle.net/dDmTf/7/

    Current obstacles:
    - The hover effect has no idea what to put back after the mouse leaves. Currently it's just drawing a "void" tile, but I can't figure out how to redraw a single tile without redrawing the whole map (see the sketch below).
    - How would I go about storing multiple layers within the map variable? I was considering just using a multi-dimensional array for each layer (similar to what you see in the current array) and iterating through it, but is that really an efficient way of doing it?

    Side note: the tile sheet used for the jsfiddle display is only for development. I'll be replacing it as the engine progresses.

    Hopefully you guys can help me; I've been struggling, since I'm learning how this kind of stuff works as I go. If you have any pointers for my JavaScript, feel free to share. As I'm more or less learning advanced usage as I go, I'm sure I'm doing plenty of things wrong.

    Note: I will continue to update this post as the engine improves, updating the jsfiddle link and the obstacles list by striking things that have been solved or adding additions. Thanks!
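
    For the first obstacle, a sketch (names like map, tilesheet, and tileSize are placeholders for whatever the fiddle actually uses): treat the map array as the source of truth, and on mouse-leave blit the one tile back from the sheet rather than redrawing the whole map:

        // Redraw a single tile from the tilesheet instead of the whole map.
        function drawTile(ctx, tilesheet, map, col, row, tileSize, sheetCols) {
            var tileId = map[row][col];                         // what belongs there
            var sx = (tileId % sheetCols) * tileSize;           // source x in the sheet
            var sy = Math.floor(tileId / sheetCols) * tileSize; // source y in the sheet
            ctx.drawImage(tilesheet, sx, sy, tileSize, tileSize,
                          col * tileSize, row * tileSize, tileSize, tileSize);
        }

        // On hover-out, restore whatever the map says was under the cursor:
        // drawTile(ctx, tilesheet, map, hoverCol, hoverRow, 32, 8);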

    Read the article

  • Do objects maintain identity under all non-cloning conditions in PHP?

    - by Buttle Butkus
    PHP 5.5. I'm doing a bunch of passing around of objects with the assumption that they will all maintain their identities: that any changes made to their states from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? I will give my basic structure here:

        class builder {
            protected $foo_ids = array(); // set in construct
            protected $foo_collection;
            protected $bar_ids = array(); // set in construct
            protected $bar_collection;

            protected function initFoos() {
                $this->foo_collection = new FooCollection();
                foreach($this->foo_ids as $id) {
                    $this->foo_collection->addFoo(new foo($id));
                }
            }

            protected function initBars() {
                // same idea as initFoos
            }

            protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) {
                // arguments are passed in using $this->foo_collection and $this->bar_collection
                foreach($foos as $foo_obj) { // (foo_collection implements IteratorAggregate)
                    $bar_ids = $foo_obj->getAssociatedBarIds();
                    if(!empty($bar_ids)) {
                        $bar_collection = new barCollection(); // sub-collection to be a component of each foo
                        foreach($bar_ids as $bar_id) {
                            $bar_collection->addBar(new bar($bar_id));
                        }
                        $foo_obj->addBarCollection($bar_collection);
                        // now each foo_obj has a collection of bar objects, each of which is
                        // also in the main collection. Are they the same objects?
                    }
                }
            }
        }

    What has me worried is that foreach supposedly works on a copy of its arrays. I want all the $foo and $bar objects to maintain their identities no matter which $collection object they become part of. Does that make sense?
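
    Regarding the foreach worry, a standalone sketch (my own illustration): since PHP 5, a variable holds an object handle, so foreach's array copy duplicates handles, not objects. Nothing is cloned unless you call clone, and changes made through any handle are visible through all of them:

        <?php
        class Foo {
            public $bars = null;
        }

        $collection = array(new Foo(), new Foo());

        // foreach iterates over a copy of the array, but each element is an
        // object handle, so $foo refers to the same instance stored in $collection.
        foreach ($collection as $foo) {
            $foo->bars = array('b1', 'b2');
        }

        var_dump($collection[0]->bars); // ["b1", "b2"]: the change persisted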

    Read the article

  • Reshape linux md raid5 that is already being reshaped?

    - by smammy
    I just converted my RAID1 array to a RAID5 array and added a third disk to it. I'd like to add a fourth disk without waiting fourteen hours for the first reshape to complete. I just did this:

        mdadm /dev/md0 --add /dev/sdf1
        mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0_n3.bak

    The entry in /proc/mdstat looks like this:

        md0 : active raid5 sdf1[2] sda1[0] sdb1[1]
              976759936 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
              [>....................]  reshape =  1.8% (18162944/976759936) finish=834.3min speed=19132K/sec

    Now I'd like to do this:

        mdadm /dev/md0 --add /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md8_n4.bak

    Is this safe, or do I have to wait for the first reshape operation to complete?

    P.S.: I know I should have added both disks first and then reshaped from 2 to 4 devices, but it's a little late for that.

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions:
    - What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    - How should bitmaps be stored for high-performance algorithms, such that read/write times are the fastest (fixed array? with or without padding? 24-bpp or 32-bpp?) (see the sketch below)
    - How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:
    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack them only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
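
    As an illustration of the first method (a sketch; the pixel type and the 16-byte row alignment are my own choices, not a standard from the question): a packed 32-bpp buffer with a row stride keeps pixel addressing to one multiply and one add, and padding each row to an alignment boundary helps SIMD and cache behaviour at a small memory cost:

        #include <cstdint>
        #include <vector>

        // Packed 32-bpp bitmap: one contiguous allocation, row-major order,
        // with the stride rounded up so every row starts 16-byte aligned.
        struct Bitmap32 {
            int width, height;
            int stride;                    // row length in pixels, >= width
            std::vector<uint32_t> pixels;  // 0xAARRGGBB, one uint32_t per pixel

            Bitmap32(int w, int h)
                : width(w), height(h),
                  stride((w + 3) & ~3),    // round up to 4 pixels = 16 bytes
                  pixels(static_cast<size_t>(stride) * h) {}

            // One multiply + one add per access; no per-row indirection.
            uint32_t& at(int x, int y) {
                return pixels[static_cast<size_t>(y) * stride + x];
            }
        };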

    Read the article
