Search Results

Search found 13619 results on 545 pages for 'memory mapped'.


  • How much C/C++ knowledge is needed for Objective-C/iPhone development?

    - by BFree
    First, a little background. I'm a .NET developer (C#) and have over 5 years' experience in both web development and desktop applications. I've been wanting to look into iPhone development for some time now, but for one reason or another always got sidetracked. I finally have a potential project on the horizon, and I'm now going full steam ahead learning this stuff. My question is this: I haven't done any C/C++ programming since my schooling days; I've been living in managed land ever since. How much knowledge, if any, is needed to be successful as an iOS developer? Obviously memory management is something I'll have to be conscious of (although with iOS 5 there seems to be something called ARC which should make my life easier), but what else? I'm not just talking about the C API (for example, in order to get the sine of a number, I call the sin() function); that's what Google is for. I'm talking about fundamental C/C++ idioms that the average C# developer is unaware of.
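    As an illustration of the kind of idiom in question, here is a minimal C sketch (not from the original post) of manual memory management and pointer ownership, which is routine in C but absent from managed C#:

        #include <stdlib.h>
        #include <string.h>

        /* No garbage collector in C: the caller owns the returned
           buffer and must free() it exactly once. */
        char *make_copy(const char *s) {
            size_t len = strlen(s) + 1;
            char *copy = malloc(len);   /* can fail: check for NULL */
            if (copy != NULL)
                memcpy(copy, s, len);
            return copy;
        }

        int main(void) {
            char *c = make_copy("hello");
            if (c != NULL) {
                /* ... use c ... */
                free(c);                /* forgetting this leaks     */
                c = NULL;               /* avoids a dangling pointer */
            }
            return 0;
        }

    (With ARC, Objective-C objects are handled for you, but raw C buffers like the one above still follow this manual pattern.)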

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem in a single shader file with two functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I stripped it down to its basic components to understand what was going wrong, because the named components of the pixel and vertex shaders were swapping the data being input. The vertex shader:

        void QuadVertex(
            inout float4 position : SV_Position,
            inout float4 color : COLOR0,
            inout float2 tex : TEXCOORD0)
        {
            // ViewProjection is a 4x4 matrix, included here
            // to show the simple pass-through of the data
            position = mul(position, ViewProjection);
        }

    And a pixel shader:

        float4 QuadPixel(
            float4 color : COLOR0,
            float2 tex : TEXCOORD0) : SV_Target0
        {
            // color is filled with position data and tex is
            // filled with color values from the vertex shader
            return color;
        }

    The ID3D11InputLayout and associated C++ code correctly compile the shaders and set them up with some simple primitive data:

        data[0].Position.x = 0.0f * 210;
        data[0].Position.y = 1.0f * 160;
        data[0].Position.z = 0.0f;
        data[1].Position.x = 0.0f * 210;
        data[1].Position.y = 0.0f * 160;
        data[1].Position.z = 0.0f;
        data[2].Position.x = 1.0f * 210;
        data[2].Position.y = 1.0f * 160;
        data[2].Position.z = 0.0f;
        data[0].Colour = Colors::Red;
        data[1].Colour = Colors::Red;
        data[2].Colour = Colors::Red;
        data[0].Texture = Vector2::Zero;
        data[1].Texture = Vector2::Zero;
        data[2].Texture = Vector2::Zero;

    When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the pixel shader's input signature needed to be in the correct order and format, laid out exactly like the vertex shader's output, regardless of the semantics:

        float4 QuadPixel(
            float4 pos : SV_Position,
            float4 color : COLOR0,
            float2 tex : TEXCOORD0) : SV_Target0
        {
            return color;
        }

    After finding this out, my question is: why don't the semantics map the appropriate components when going from vertex shader to pixel shader? Is there any way to make certain semantics always map to other semantics, or do I always have to follow the rigid shader signature (in this case Position, Color, and Texture)? As a side note on why I'm asking: I know that when using XNA, my shader signatures could differ in parameter order and even drop items between the vertex and pixel shader functions, with only the COLOR0 and TEXCOORD0 components being used, and everything would still match up correctly. However, I also know that XNA relied on a DX9 (and maybe a little DX10) implementation, so maybe that kind of flexibility no longer exists in DX11?

    Read the article

  • How to get the Dash and HUD to appear (and stop Unity spewing error messages)

    - by Ubuntiac
    I just installed Ubuntu 12.04 on my wife's Dell Inspiron 1501, which uses an R300 ATI graphics chip. Neither the Dash nor the HUD appears when pressing the appropriate key. When I try unity --reset & in the terminal, I see it spitting out, over and over:

        r300: CS space validation failed. (not enough memory?) Skipping rendering.

    This is just after starting Ubuntu with no apps open, so I find it hard to believe that merely rendering the Dash/HUD completely blows out the VRAM. Any suggestions on getting this working? /usr/lib/nux/unity_support_test -p shows:

        OpenGL vendor string:   X.Org R300 Project
        OpenGL renderer string: Gallium 0.4 on ATI RS480
        OpenGL version string:  2.1 Mesa 8.0.2

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       yes

    All sections say "yes".

    Read the article

  • How do you maintain content size vs. content quality in an application?

    - by PeterK
    I am developing my first Cocos2d iPhone/iPad game, which includes quite a few sprites; I need approximately 80 different ones. As this is for both normal and HD displays, I have a 2x version of each sprite. I am using TexturePacker to optimize things. Are there any rules of thumb, tricks, or ideas for balancing content size against quality, and for maintaining high-quality HD graphics given their size versus the device's memory limits? Also, is it a good idea to keep only one copy of each sprite and scale it in code?

    Read the article

  • Looking for an old classic book about Unix command-line tools

    - by Little Bobby Tables
    I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. It was an excellent book, but I have totally forgotten its name. The great thing about this specific book was its running example: it showed how to implement a university bookkeeping system using only text-processing tools. You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. If my memory serves, the book had a black cover and was published circa 1980. Does anyone remember this book? I would appreciate any help in finding it.

    Read the article

  • SS7 (M3UA, SCCP, TCAP, MAP) Stack

    - by Ammar Hameed
    I'm building an open-source SMSC from scratch; it's almost finished. The SRI and forwardSM operations are working, but I still have a few things to do for the receiving part. I've already built the SS7 stack, but I'm using a database to save the TCAP transaction IDs so they can be updated later to generate responses. My approach is this: I created a memory (heap) table, saved the TCAP TIDs in the database, then compared each received TCAP TID with the saved TIDs to decide whether to end the TCAP session or continue it. What is the best way to implement this? I'm thinking of a doubly linked list that holds the TCAP TIDs. Am I going in the right direction, or should I use another technique besides a database or a doubly linked list? Or should I leave it as it is and let the database do the job of saving the TIDs? Please note that I'm using the SCTP implementation available on Linux (lksctp) as the transport protocol, the language I'm using is C, and the database is MySQL.
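    A minimal C sketch of the doubly-linked-list approach being considered (names are hypothetical; assumes TIDs fit in a uint32_t and that access is single-threaded or externally locked):

        #include <stdint.h>
        #include <stdlib.h>

        /* One in-flight TCAP dialogue. */
        typedef struct tid_node {
            uint32_t tid;                  /* TCAP transaction ID */
            struct tid_node *prev, *next;
        } tid_node;

        typedef struct { tid_node *head; } tid_list;

        /* O(1) insert at the head when a new dialogue begins. */
        static tid_node *tid_add(tid_list *l, uint32_t tid) {
            tid_node *n = calloc(1, sizeof *n);
            if (n == NULL) return NULL;
            n->tid = tid;
            n->next = l->head;
            if (l->head) l->head->prev = n;
            l->head = n;
            return n;
        }

        /* O(n) lookup when a continue/end message arrives. */
        static tid_node *tid_find(tid_list *l, uint32_t tid) {
            for (tid_node *n = l->head; n; n = n->next)
                if (n->tid == tid) return n;
            return NULL;
        }

        /* O(1) unlink once the dialogue is over. */
        static void tid_remove(tid_list *l, tid_node *n) {
            if (n->prev) n->prev->next = n->next; else l->head = n->next;
            if (n->next) n->next->prev = n->prev;
            free(n);
        }

    Note that lookup in the list is O(n) in the number of open dialogues; a hash table keyed by TID keeps insert and removal cheap while making lookup O(1) as well, which may matter more to an SMSC under load than the list's simplicity.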

    Read the article

  • SSC Clinic: Can Implementing "Optimize for Ad Hoc Queries" Boost Performance for the SQLServerCentral.com and Simple-Talk.com SQL Servers?

    With the introduction of the instance-level option "optimize for ad hoc workloads" in SQL Server 2008, DBAs have a tool to deal with a problem known as plan cache pollution, or plan cache bloat. It's often caused when one-time-use ad hoc queries are sent to SQL Server from Object-Relational Mapping (ORM) solutions, such as LINQ, NHibernate, or Entity Framework. The problem can prevent SQL Server from using its available memory optimally, potentially hurting performance.

    Read the article

  • How common are circular references? Would reference-counting GC work just fine?

    - by user9521
    How common are circular references? The less common they are, the fewer hard cases you face if you are writing in a language with only reference-counting GC. Are there any cases where it wouldn't work to make one of the references a "weak" reference so that reference counting still works? It seems you should be able to have a language use only reference counting plus weak references and have things work fine most of the time, with efficiency as the goal. You could also have tools to help detect memory leaks caused by circular references. Thoughts, anyone? It seems that Python uses reference counting (I don't know for sure whether it also runs a tracing collector occasionally), and I know Vala uses reference counting with weak references; so it has been done before, but how well does it work?
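    A minimal C sketch (hypothetical names) of why a cycle defeats pure reference counting, and why weakening one edge fixes it:

        #include <stdlib.h>

        typedef struct node {
            int refcount;
            struct node *other;            /* strong reference */
        } node;

        static node *node_new(void) {
            node *n = calloc(1, sizeof *n);
            if (n != NULL)
                n->refcount = 1;
            return n;
        }

        static void node_release(node *n) {
            if (n != NULL && --n->refcount == 0) {
                node_release(n->other);
                free(n);
            }
        }

        int main(void) {
            node *a = node_new();          /* refcount(a) == 1 */
            node *b = node_new();          /* refcount(b) == 1 */
            a->other = b; b->refcount++;   /* a -> b           */
            b->other = a; a->refcount++;   /* b -> a: a cycle  */
            node_release(a);               /* 2 -> 1, not 0    */
            node_release(b);               /* 2 -> 1, not 0    */
            /* Both nodes leak: neither count can reach zero.
               Making one edge weak (no refcount++ when stored,
               and skipped by node_release) would let the two
               releases above free both nodes. */
            return 0;
        }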

    Read the article

  • Oracle BI/DWH performance article (1) [original Japanese title lost to encoding corruption]

    - by Yusuke.Yamamoto
    The original Japanese text of this entry was garbled into "?" runs by an encoding error. The recoverable fragments indicate that it covers BI query performance on Oracle Database 11gR2: a short demo video (03:54), SQL monitoring from Enterprise Manager, In-Memory Parallel Execution, and related validation work from the Oracle GRID Center, with pointers to Oracle Database, Oracle Enterprise Manager, and DWH (data warehouse) resources.

    Read the article

  • Which is the best non-Java, dynamic programming language for building attractive GUIs?

    - by VeeKay
    I am well acquainted with Java and Groovy, but somehow I am not impressed by the performance or looks of Swing-based applications built on them. So I want to learn about THE best alternative dynamic programming language (because I am looking for a little luxury while writing code: no fiddling with pointers, memory handling, static-typing difficulties, etc.) for developing attractive cross-platform GUIs. To be precise, by attractive I mean support for elegant translucent windows and nicer components (not the flashy Adobe stuff). Can you please suggest a programming language that fits?

    Read the article

  • Animating isometric sprites

    - by Mike
    I'm having trouble coming up with a way to animate these 2D isometric sprites. The sprites are stored like this:

        <Game Folder Root>/Assets/Sprites/<Sprite Name>/<Sprite Animation>/<Sprite Direction>/<Frame Number>.png

    So, for example, /Assets/Sprites/Worker/Stand/North-East/01.png. Sprite sheets aren't really viable for this type of animation: the example stand animation alone is 61 frames, 61 frames for all 8 directions is huge, and there is more than just a standing animation for each sprite. Creating an sf::Texture for every image and every frame seems like it will take up a lot of memory and be hard to keep track of. Unloading one image and loading the next every single frame seems like a lot of unnecessary work. What's the best way to handle this? One option is sketched below.
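    One common approach is to load each frame lazily the first time it is needed and cache it by path, so memory only ever holds frames actually in use. The project above is SFML/C++, but the idea is language-agnostic; here is a minimal C sketch in which load_texture and free_texture are hypothetical stand-ins for the sf::Texture calls:

        #include <stdio.h>
        #include <string.h>

        #define MAX_CACHE 256

        typedef struct texture texture;                 /* opaque handle      */
        extern texture *load_texture(const char *path); /* hypothetical loader */
        extern void     free_texture(texture *t);

        typedef struct {
            char     path[256];
            texture *tex;
        } cache_entry;

        static cache_entry cache[MAX_CACHE];
        static int cache_len = 0;

        /* Build the frame path from the directory scheme in the question. */
        static void frame_path(char *out, size_t n, const char *sprite,
                               const char *anim, const char *dir, int frame) {
            snprintf(out, n, "Assets/Sprites/%s/%s/%s/%02d.png",
                     sprite, anim, dir, frame);
        }

        /* Return the cached texture for a frame, loading it on first use. */
        texture *get_frame(const char *sprite, const char *anim,
                           const char *dir, int frame) {
            char path[256];
            frame_path(path, sizeof path, sprite, anim, dir, frame);
            for (int i = 0; i < cache_len; i++)
                if (strcmp(cache[i].path, path) == 0)
                    return cache[i].tex;
            if (cache_len == MAX_CACHE) {      /* naive eviction: drop oldest */
                free_texture(cache[0].tex);
                memmove(cache, cache + 1, (--cache_len) * sizeof *cache);
            }
            strncpy(cache[cache_len].path, path,
                    sizeof cache[cache_len].path - 1);
            cache[cache_len].tex = load_texture(path);
            return cache[cache_len++].tex;
        }

    With 61-frame animations, a smarter eviction policy (for example, dropping frames of animations no longer playing) would replace the naive oldest-first rule, but the lookup-then-load shape stays the same.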

    Read the article

  • Boot splash broken by "SP5100 TCO timer: mmio address 0xyyyyyyy already in use"

    - by mogliii
    I have Ubuntu 11.04 with all the latest updates, an ATI HD 4350 graphics card, and the "ATI/AMD proprietary FGLRX graphics driver" activated. The reported behaviour does not affect functionality; it's just a cosmetic issue. When I booted from the desktop CD, the Ubuntu boot splash was shown correctly in high resolution. Now, after installation with FGLRX, the display is broken (see picture): http://img824.imageshack.us/img824/7269/tcotimer.jpg This is what can be found in dmesg:

        [    8.621803] SP5100 TCO timer: SP5100 TCO WatchDog Timer Driver v0.01
        [    8.621967] SP5100 TCO timer: mmio address 0xfec000f0 already in use
        [    8.622650] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel.
        [    8.622656] Disabling lock debugging due to kernel taint

    MMIO stands for memory-mapped I/O: https://en.wikipedia.org/wiki/Memory-mapped_I/O Any idea how to get back the high-res splash?

    Read the article

  • error: you need to load kernel first

    - by Angelos318
    I made a clean install of Ubuntu 11.10 on my Sony Vaio laptop, and when the installation was ready, it prompted me to remove the USB stick I was installing the distro from and press Enter to reboot. After this reboot, the first thing I got was the following error:

        error: couldn't read file
        error: you need to load the kernel first
        Press any key to continue...

    After that it throws me back to the GRUB select screen:

        Ubuntu, with Linux 3.0.0-14-generic-pae
        recovery mode
        previous Linux versions (none, since I made a clean install)
        memory test

    If I choose the first option, it shows only a black screen and never loads anything. If I reboot, the same thing happens. Could I repair this using boot-repair? Is there any other way? Note: I know nothing about Linux internals, so I am a total noob on this one. Update: boot-repair did not help. Grub.cfg here: http://pastebin.com/GKLuDuhM Boot Info Script: http://pastebin.com/indARkKJ

    Read the article

  • What are the system requirements for each flavor of Ubuntu Desktop?

    - by Braiam
    I'm thinking about installing Ubuntu Desktop, but I don't know which flavor is best for my system. What are the minimum and recommended hardware requirements? What kind of CPU? How much memory? Do I need hardware acceleration? Which flavor should I use? This is an attempt at a canonical answer. My answers give the official minimal requirements; the recommended ones are a mix of official sources and opinion (the source is given along with each answer). You can comment or edit if you feel the information is obsolete or incomplete. A good rule of thumb is that any system capable of running Windows Vista, 7, or 8, or x86 OS X, will almost always be a lot faster with any Ubuntu flavor, even on machines with lower specs than described below.

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a RAID 5 (software) with 5x2TB drives. I encrypted the RAID with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the RAID took less than 10 seconds; now (for a few weeks) mounting alone takes 1 minute 30 seconds, and the CPU stays around 93% the whole time. The output of "time sudo mount /dev/mapper/8000 /media/8000" is:

        real    1m31.952s
        user    0m0.008s
        sys     1m25.229s

    At the same time, only one line is added to /var/log/syslog:

        kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is 12.04.1 LTS and no updates are pending. I checked the partition with fsck, but it says all is OK. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the RAID bitmap (as suggested in some forum), but it did not change the behaviour:

        sudo mdadm --grow /dev/md0 -b internal
        sudo mdadm --grow /dev/md0 -b none

    I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" gave values between 62 and 159 MB/sec:

        Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
        Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
        Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
        Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    Although I think it is strange that the read rate jumps by more than 100%; could that mean something? The speed test reading from the mapped (decrypted) device, "sudo hdparm -t /dev/mapper/8000", shows similar behaviour, although it is of course much slower:

        Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
        Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
        Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, "mount -vvv /dev/mapper/8000 /media/8000", does not help much:

        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID:        0
        mount: eUID:       0
        mount: spec:  "/dev/mapper/8000"
        mount: node:  "/media/8000"
        mount: types: "(null)"
        mount: opts:  "(null)"
        mount: you didn't specify a filesystem type for /dev/mapper/8000
               I will try type ext4
        mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • Subversion BI experience - not a very good one, but working now

    - by Kevin Shyr
    Suffice to say, there is now a document in place and I'm the drill sergeant, harassing people to do proper check-ins and throwing out those who don't. Some people suggest that in an SSIS project it doesn't really matter whether developers have the latest version of the project, since checking in a package puts it in the repository, from which we can pull it out later. I beg to differ, because: (1) When people don't see a package, they might start creating one because their user story requires the use of the table; they will then proceed to create a package and override whatever might already be in the repository. (2) I didn't see anywhere in the repository to mark which packages were slated for deletion, so I ended up restoring them all and sending the list out to the developers. Then we get into the territory of relying on people's memory. I'd love to hear other people's experiences using Subversion to manage a BI project.

    Read the article

  • Why Java as a First Language?

    - by dsimcha
    Why is Java so popular as a first language to teach beginners? To me it seems like a terrible choice: It's statically typed, and static typing isn't useful unless you care a lot about either performance or scaling to large projects. It requires tons of boilerplate to get the simplest code up and running; try explaining "Hello, world" to someone who's never programmed before. It only handles the middle levels of abstraction well and is single-paradigm, thus leaving out a lot of important concepts: you can't program at a very low level (pointers, manual memory management) or at a very high level (metaprogramming, macros) in it. In general, Java's biggest strength (i.e. the reason people use it despite the shortcomings of the language per se) is its libraries and tool support, which is probably the least important attribute for a beginner language. In fact, while useful in the real world, these may be negatives from a pedagogical perspective, as they can discourage learning to write code from scratch.

    Read the article

  • Problem with NVIDIA G86 on Kubuntu 12.04

    - by Stefan
    I got problems some weeks ago with my NVIDIA G86 (8500 GT), supposedly due to the infamous 295.40 version of the driver. I got error messages like NVRM: RmInitAdapter failed. I tried various suggestions about setting kernel ACPI and memory options, but no luck. I pulled in x-swat and got 302.17, if I remember correctly. It did not help. People recommended xorg-edgers, so I pulled that in and got kernel 3.5.0.12 and nvidia 304.43, but the problem remained. Getting slightly panicked, I tried to go back to vanilla 12.04, so I purged nvidia* and located and removed anything on the system that smelled of nvidia. I installed nouveau, because people said it was great, but as it turns out, my card does not seem to be supported. :-( Sigh... So now I fear that I have a messed-up system, and graphics is terrible. Any help would be appreciated. Xorg.0.log: http://paste.ubuntu.com/1189616/ kern.log: http://paste.ubuntu.com/1189634/

    Read the article

  • Should I tell a departed coworker about their "sev 1" defect?

    - by noahz
    I had a co-worker leave our company recently. Before leaving, he coded a component that had a severe memory leak that caused a production outage (OutOfMemoryError in Java). The problem was essentially a HashMap that grew and never removed entries, and the solution was to replace the HashMap with a cache implementation. From a professional standpoint, I feel that I should let him know about the defect so he can learn from the error. On the other hand, once people leave a company, they often don't want to hear about legacy projects that they have left behind for bigger and better things. What is the general protocol for this sort of situation?

    Read the article

  • Handheld device software - Handheld tuners?

    - by NathanH
    Hey, I've been looking around and really don't understand what software some of these companies use for their handheld devices. I've used a lot of handheld tuners (Cobb Accessport, DiabloSport). I've gotten more and more knowledgeable about programming, and I really want to understand what software these devices run to provide a graphical interface and to hold all the files to flash over to the ECU. I'm just unsure how you would get all the components to work together (screen, buttons, memory) without having drivers installed. I could be totally wrong in using the term "drivers", but that's what I would like help getting more knowledge on (the only thing I've really found is making a handheld Game Boy from scratch, but that was using an emulator). I've tried looking it up but can't really find a good write-up or explanation anywhere. I'd just really like to put a little device together, build a simple user interface, and work from there. Thanks, Nathan

    Read the article

  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies a multi-tasking OS and a time-sharing OS as two different things, but I really don't see a difference between the two. MULTI-TASKING: You load a number of programs into memory and execute them. You execute another program if the time quantum allocated to the current program expires, OR if it goes on to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same question applies to serial processing versus batch processing. Although they seem the same, I guess the only difference would be the way control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with control information for every process, while in batch processing the entire batch uses the same set of control information, the way all the print jobs in a batch would share the same control information.

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the template. A damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish. Enter our Queen of Excel, Shirley, from the development team. Shirley is single-handedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e. the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this, you can come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:

        XDO_?DEPTNO_SUMMARY?    <?DEPTNO?>
        XDO_?DNAME_SUMMARY?     <?DNAME?>
        XDO_GROUP_?G_D_DETAIL?  <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?     <?DEPTNO?>

    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ... ?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data, and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (for memory, developer time, and disk space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate pieces, each with its own smaller texture/material? Or does it depend on the target platform, i.e. PC vs. tablet? An example: say you have a typical house with a tiled roof. Option one: model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof sit side by side in one texture file, say 512x512. Option two: model the roof and walls as separate objects, unwrap them individually, and have two UV templates; you could then have a 256x256 file for each one.

    Read the article

  • Cannot get Atheros AR9285 to work on 12.10

    - by user100449
    I've already gone through all possible advice and still cannot start my Atheros AR9285 wireless card. I have a Toshiba Portege Z830 laptop on which the WiFi worked under Windows 7, but after migrating to Ubuntu 12.10 I'm not able to get it working. This is what I see from the lshw command:

        *-network UNCLAIMED
            description: Network controller
            product: AR9285 Wireless Network Adapter (PCI-Express)
            vendor: Atheros Communications Inc.
            physical id: 0
            bus info: pci@0000:02:00.0
            version: 01
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:c0500000-c050ffff

    And this is what I see from rfkill list:

        0: Toshiba Bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: no
        1: hci0: Bluetooth
            Soft blocked: yes
            Hard blocked: no

    Any idea?

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a cloud management standard. That work has culminated in today's release by the DMTF of version 1.0 of the Cloud Infrastructure Management Interface (CIMI). CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by cloud vendors, you will no longer need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse-engineer its inner workings, CIMI is a de jure standard under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements, and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience.

    What does CIMI allow? CIMI is both a model for the resources in the cloud (computing, storage, networking) and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports discovery of which features are implemented. This means you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.

    Isn't it too early for a standard? A key feature of a successful standard is that it allows compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional, possibly vendor-specific, features have been implemented. As multiple vendors implement such features, they become candidates for future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect CIMI to be adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud

    Read the article
