Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.
    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Coming from a pharma and medical devices software background, I find this unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases.
    I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev).
    This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down: to get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
    Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much a pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?
    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Problems in exporting terrain from Autodesk 3ds Max

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game. For the terrain part I have exported the terrain in 3ds format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It's working fine, but with a few problems: The terrain is mainly made up of some cuboid boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera). I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface. I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model has a gun, which is not appearing in OpenGL, and it too has a bad aliasing problem. I have googled around but couldn't find any relevant solutions. Thanks in advance.

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (they would look okay rendered alone on a quad) and are not image processing (they don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples). PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image-processing effects. EDIT: this is an open-ended question and I wish there were a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article

  • IIS throws HTTP Error 503. The service is unavailable after installation of Windows 8

    - by Floran
    I was using IIS 7.5 on Windows 7. Everything worked great. I then installed Windows 8. The shortcut link to my IIS 7.5 stopped working, because all of this was apparently moved to a windows.old folder. I had to install IIS 8 via 'Turn Windows features on or off'. After I did that, I saw my sites again and everything looked OK, until I browsed to a site and got the error: Service Unavailable - HTTP Error 503. The service is unavailable. I restarted IIS, restarted my PC, restarted the single site, but the problem remains. I also tried looking at the logs. My IIS log files used to be in C:\inetpub\logs\LogFiles\W3SVC4, but I don't see any log from today in there. Maybe that has changed with the new IIS as well? If so, where are the logs now, or where can I see where they should be saved?
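    (Not from the original question, but for context: a 503 from IIS usually means the site's application pool is stopped or failed to start. As a hedged first check, appcmd, which ships with IIS, can list pool states and restart one; the pool name below is only an example.)

        rem list application pools and their state (run from an elevated prompt)
        %windir%\system32\inetsrv\appcmd list apppool
        rem start a stopped pool (replace the name with your site's pool)
        %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"DefaultAppPool"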

    Read the article

  • Mouse pointer size problem

    - by Rasmus Pedersen
    My mouse cursor is double the normal size. It's only the default pointer that is enlarged; variations like resize, busy and so on are the correct size. The problem persists even when I change the cursor theme. If I move the cursor inside a Firefox window it changes to the correct size. My resolution is 2560x1440 in a single-screen setup. nvidia-settings reports my DPI to be 108x107. I've tried to force that DPI in the LightDM conf, since I figured it must have something to do with the DPI calculation. I have also tried to change the cursor size through dconf, but the problem still remains. I hadn't seen this problem before; it arrived after the upgrade from Beta 2 to the release version of Ubuntu 11.10. Anybody got any idea what the problem might be? It's pretty annoying with the huge cursor.

    Read the article

  • How to import more than one SSIS package into BIDS in one shot!

    - by Luca Zavarella
    Have you ever wanted to add more than one existing Integration Services package (e.g. 20 packages) to a SSIS project? Well, you may suppose that an Open dialog supports multiple file selection to import more than one file at a time... BIDS' Open dialog doesn't allow this; you can only select a single file! Hence the loss of valuable time spent importing the packages one at a time. A few days ago I learned a trick that solves the problem, thanks to this post by Matt Masson. Just copy all the packages to import from Windows Explorer (Ctrl + C), then right-click on the SSIS Packages folder of the Integration Services project and do a simple Paste (Ctrl + V). "Auto-magically" you'll have all those packages imported into your Integration Services project!! What can I say... this feature was well hidden!

    Read the article

  • Keystore and Credential Store interplay in OWSM - 11g

    - by Prakash Yamuna
    One of the most common problems faced by customers is the use of the keystore and its interplay with the credential store. Here is a picture that describes these relationships. (Click on the picture for a larger image.) The picture makes some assumptions in describing the relationship. Some of the assumptions are: a) the keys used for signing and encryption are the same; b) a keystore can have multiple keys and each key can have its own alias (in the picture I show only a single key with the alias "orakey"); c) the keystore being described here is a JKS keystore. Things can vary slightly for other types of keystores. I hope to have a detailed How-To that provides the larger picture and then shows these relationships in that context; this picture was created in the context of that How-To. However, I think people will find the picture useful on a standalone basis as well. The <serviceInstance> is the entry you will find in jps-config.xml.
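    (As a rough illustration, not taken from the post: a keystore <serviceInstance> entry in jps-config.xml generally has the shape sketched below. The location and the CSF map/key names are example values and must match what has actually been seeded in the credential store.)

        <serviceInstance name="keystore" provider="keystore.provider"
                         location="./default-keystore.jks">
          <!-- illustrative values; point the map/key names at your own credential store entries -->
          <property name="keystore.type" value="JKS"/>
          <property name="keystore.csf.map" value="oracle.wsm.security"/>
          <property name="keystore.sig.csf.key" value="sign-csf-key"/>
          <property name="keystore.enc.csf.key" value="enc-csf-key"/>
        </serviceInstance>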

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head-to-head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer.
    The client computers themselves are rack-mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack-mount case, is a well-specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive.
    Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize, as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage.
    Linux still runs the server, and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients': in addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD.
    When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server, which then supplied PXELINUX using TFTP. Once booted, file access was accomplished through the network block device (NBD). This is a very easy-to-use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user-mode device driver and the device can be mounted within the file system using the mount command.
    One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system, as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device; besides, these security issues aren't relevant in this setup.
    Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. It can also optionally boot from an HTTP server rather than the more traditional TFTP server.
    According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios.
    At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM, so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.
    Summary: Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.
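    (As an aside, not from the article: the NBD arrangement described above boils down to a couple of commands on each side. This is a hedged sketch with invented paths and hostnames, and the exact invocation syntax varies between nbd versions.)

        # on the server: export a disk image on a TCP port (old-style invocation)
        nbd-server 2000 /srv/images/client01.img
        # on the client: attach the export to a local block device, then mount it
        nbd-client gameserver 2000 /dev/nbd0
        mount /dev/nbd0 /mnt/root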

    Read the article

  • using egrep to find missing @ in log

    - by jols
    I am using the following command to find log entries that are the result of a login to the email server: egrep '_login[^ ]' /var/log/exim_mainlog. That works fine to find entries that contain content like this: P=esmtpa A=courier_login:[email protected] S=1573 id=f1cd08396,... But what I need to do is change my grep statement so that it finds single-word logins that do not use the @ sign, like so: P=esmtpa A=courier_login:name S=1573 id=f1cd08396,... Where the login before was "[email protected]", in the second log entry the login used was only "name". Is this possible using grep or egrep, perhaps in some kind of a compound statement? Thanks much.
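    (Not part of the question, but one pattern that should do this, assuming the login token always runs up to the next space:)

        egrep '_login:[^@ ]+ ' /var/log/exim_mainlog

    The [^@ ]+ run cannot cross an @, and it is anchored right after the colon, so entries whose login contains an @ never match; only the bare single-word logins do.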

    Read the article

  • Flattening a Jagged Array with LINQ

    - by PSteele
    Today I had to flatten a jagged array. In my case, it was a string[][] and I needed to make sure every single string contained in that jagged array was set to something (non-null and non-empty). LINQ made the flattening very easy. In fact, I ended up making a generic version that I could use to flatten any type of jagged array (assuming it's a T[][]):

        private static IEnumerable<T> Flatten<T>(IEnumerable<T[]> data)
        {
            return from r in data
                   from c in r
                   select c;
        }

    Then, checking to make sure the data was valid was easy:

        var flattened = Flatten(data);
        bool isValid = !flattened.Any(s => String.IsNullOrEmpty(s));

    You could even use method grouping and reduce the validation to:

        bool isValid = !flattened.Any(String.IsNullOrEmpty);
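    (As a side note, not from the original post: the query above is equivalent to LINQ's built-in SelectMany, which flattens without a helper method.)

        // equivalent to Flatten(data) above
        bool isValid = !data.SelectMany(r => r).Any(String.IsNullOrEmpty);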

    Read the article

  • Serverside memory efficiency and threading for a turn based game

    - by SkeletorFromEterenia
    I've been programming a turn-based war game for some years now (along with the engine), and I'm having quite a hard time figuring out what the game's server architecture should look like, since most game server architecture articles I found focus either on FPS or MMOGs, which doesn't really fit: I want many matches with 1-16 players on my server, with each match being played in turn-based mode. My chief concern is memory usage, since the most basic approach of loading every game that is being played completely into RAM seems quite inefficient. Is there a suitable strategy for selecting only the needed bits and loading them? Another question I have is how to design the threading on the server, since I think using only a single thread could be a problem due to the fact that the game, or part of it, might have to be loaded from the database. I would be very happy if you could share your knowledge or point me to material on this topic.
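    (Not from the post, but a minimal C# sketch of the lazy-loading idea described above: keep only matches with recent activity in memory, pull the rest from storage on demand, and evict idle ones. All type names and the load/save delegates here are invented for illustration.)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Match
        {
            public DateTime LastMove = DateTime.UtcNow;
            public TimeSpan IdleFor { get { return DateTime.UtcNow - LastMove; } }
        }

        class MatchCache
        {
            private readonly Dictionary<int, Match> active = new Dictionary<int, Match>();
            private readonly Func<int, Match> load;   // e.g. a database read
            private readonly Action<int, Match> save; // e.g. a database write

            public MatchCache(Func<int, Match> load, Action<int, Match> save)
            {
                this.load = load;
                this.save = save;
            }

            // Fetch a match, pulling it from storage only when it isn't resident.
            public Match Get(int matchId)
            {
                Match match;
                if (!active.TryGetValue(matchId, out match))
                {
                    match = load(matchId); // blocking I/O: run off the turn-processing thread
                    active[matchId] = match;
                }
                return match;
            }

            // Persist and drop matches that have been idle longer than the limit.
            public void EvictIdle(TimeSpan idleLimit)
            {
                foreach (var id in active.Where(kv => kv.Value.IdleFor > idleLimit)
                                         .Select(kv => kv.Key)
                                         .ToList())
                {
                    save(id, active[id]);
                    active.Remove(id);
                }
            }
        }

    For the threading question, a common pattern for turn-based games is one thread processing turns against this in-memory state, with database loads and saves handed off to a small worker pool so the turn loop never blocks.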

    Read the article

  • How to Back Up & Restore Your Installed Ubuntu Packages With APTonCD

    - by Chris Hoffman
    APTonCD is an easy way to back up your installed packages to a disc or ISO image. You can quickly restore the packages on another Ubuntu system without downloading anything. After using APTonCD, you can install the backed-up packages with a single action, add the packages as a software source, or restore them to your APT cache.

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:
    Optimized for Oracle Database 12c - GoldenGate 12c is custom-tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.
    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with, or knowledge of, other delivery processes. This setup was complicated to configure and time-consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.
    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance.
    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.
    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded to MS SQL Server, DB2, and Teradata.
    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.
    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed-data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.
    Keep tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • Is Perforce as good at merging as DVCSs?

    - by dukeofgaming
    I've heard that Perforce is very good at merging. I'm guessing this has to do with the fact that it tracks changes in the form of changelists, where you can add differences across several files in a single blow. I think this implies Perforce gathers more metadata and therefore has more information to do smarter merging (at least smarter than Subversion, even though Perforce is also centralized). Since this is similar to how Mercurial and Git handle changes (I know DVCSs track content rather than files), I was wondering if somebody knew what the subtle differences are that make Perforce better or worse than a DVCS like Mercurial or Git.

    Read the article

  • How do I play HD video without stuttering?

    - by hugocreal
    Hello All, I want to play a Blu-ray video from my hard drive with Boxee, but it chokes all the time. I've tried other video players, but it's the same: stuttering video with VLC, mplayer, and the default video player on Ubuntu. It's a 10GB .mkv file. I've read in many forums that people just can't get this to work. Any ideas? Thanks. Ubuntu 10.10. My PC specs: single-core 2GHz, ATI HD 4350 (I have installed the drivers from "Hardware Drivers"), 2GB memory.

    Read the article

  • What are some easy techniques to scan books for new information?

    - by aditya menon
    I find it irresistible to keep purchasing cheap programming and technical e-books in fields such as Drupal, PHP, etc., and also compulsively download free material made available, such as that from Microsoft's developer blog... The main problem with the large library I've accumulated is that many chapters (especially the first few) in these books are packed with information I already know, but with helpful tidbits hidden in between. The logical step would be to skip those chapters and read the ones I don't seem to know anything about, but I'm afraid I may lose out on really important information this way. Yet naturally it is tedious to have to read about variables, functions and objects all over again when you are trying to learn more about the Registry pattern, for example. It's hard to research this on the net, because my question itself seems vague and difficult to formulate into a single search query. I need advice from people - what do you do in this situation?

    Read the article

  • Are there any tools for testing drag & drop Windows desktop applications?

    - by Andrew
    I need to develop a Windows desktop application (Win32 API) which will use drag & drop extensively in many formats, including my own. I need to test it, for example, with CF_TEXT dragging, CF_RTF, CF_DIB, CF_METAFILEPICT, and many others. The tool needs to have the following features:
    - Displaying the content of a DataObject dragged into it with all available format viewers.
    - Allowing preparation of a few samples of different clipboard formats together in a single DataObject, ready for dragging into my app.
    - Allowing inclusion of my own format names in the formats list of the testing tool.

    Read the article

  • Windows Azure: Server and Cloud Division

    - by kaleidoscope
    On 8th Dec 2009 Microsoft announced the formation of a new organization within the Server & Tools Business that combines the Windows Server & Solutions group and the Windows Azure group into a single organization called the Server & Cloud Division (SCD). SCD will deliver solutions that help our customers realize even greater benefits from Microsoft's investments in on-premises and cloud technologies, and the new division will help strengthen an already solid and extensive partner ecosystem. Together, Windows Server, Windows Azure, SQL Server, SQL Azure, Visual Studio and System Center help customers extend existing investments to include a future that will combine both on-premises and cloud solutions, and SCD is now a key player in that effort. http://blogs.technet.com/windowsserver/archive/2009/12/08/windows-server-and-windows-azure-come-together-in-a-new-stb-organization-the-server-cloud-division.aspx

    Read the article

  • Effective template system

    - by Alex
    I'm building a content management system, and need advice on which theming structure I should adopt. A few options (this is not a complete list):
    - WordPress style: the controller decides what template to load based on the user request, e.g. home page / article archive / single article page. Each of these templates is unrelated to the other templates and must exist within the theme. The theme developer decides whether to use inner templates (like "sidebar", "sidebar item") and includes them manually wherever they are needed.
    - Drupal style: the controller gives the theme developer control only over inner templates; if they don't exist it falls back internally to some default templates (I find this very restrictive).
    - Funky style: the controller only loads an "index.php" template and provides the theme developer conditional tags, which can be used to include inner templates as desired.
    Among these styles, or others, which style of template system allows for fast development and a more concise design and implementation?

    Read the article

  • Overloading interface buttons, what are the best practices?

    - by XMLforDummies
    Imagine you'll always have a button labeled "Continue" in the same position in your app's GUI. Would you rather make a single button instance that takes different actions depending on the current state?

        private State currentState = State.Step1;

        private void ContinueButton_Click()
        {
            switch (currentState)
            {
                case State.Step1:
                    DoThis();
                    currentState = State.Step2;
                    break;
                case State.Step2:
                    DoThat();
                    break;
            }
        }

    Or would you rather have something like this?

        public Form()
        {
            this.ContinueStep2Button.Visible = false;
        }

        private void ContinueStep1Button_Click()
        {
            DoThis();
            this.ContinueStep1Button.Visible = false;
            this.ContinueStep2Button.Visible = true;
        }

        private void ContinueStep2Button_Click()
        {
            DoThat();
        }

    Read the article

  • Is true multithreading really necessary?

    - by Jonathan Graef
    So yeah, I'm creating a programming language. And the language allows multiple threads. But, all threads are synchronized with a global interpreter lock, which means only one thread is allowed to execute at a time. The only way to get the threads to switch off is to explicitly tell the current thread to wait, which allows another thread to execute. Parallel processing is of course possible by spawning multiple processes, but the variables and objects in one process cannot be accessed from another. However the language does have a fairly efficient IPC interface for communicating between processes. My question is: Would there ever be a reason to have multiple, unsynchronized threads within a single process (thus circumventing the GIL)? Why not just put thread.wait() statements in key positions in the program logic (presuming thread.wait() isn't a CPU hog, of course)? I understand that certain other languages that use a GIL have processor scheduling issues (cough Python), but they have all been resolved.

    Read the article

  • What are the appropriate mount options for a shared NTFS partition on an SSD in a dual boot Ubuntu/Windows setup?

    - by Andreas Jonsson
    I have Ubuntu 13.10 and Windows 7 installed in dual boot on a single SSD. In addition they share an NTFS partition where I put all my data and documents. What is the optimal way to mount this NTFS partition in /etc/fstab (considering performance and minimizing wear of the SSD)? Similar questions have been asked, but I could not find answers to this particular scenario. As I understand it, the mount option 'discard' is not supported for NTFS and so should not be used (although it is recommended here). Another often quoted mount option is 'noatime'. I use it for my ext4 partitions. Does it apply to NTFS? My current /etc/fstab line is: UUID=XXXXXXXXXXXXXXXX /dos ntfs defaults,nls=utf8,uid=1000,gid=1000 0 0
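    (Not from the question, but for comparison: noatime is a generic VFS option that mount applies to ntfs-3g as well, so, assuming the same UUID and mount point, a plausible starting point would be:)

        UUID=XXXXXXXXXXXXXXXX /dos ntfs defaults,noatime,nls=utf8,uid=1000,gid=1000 0 0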

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is that they have had a kind of built-in covariance long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance.
    SZ-array covariance
    To demonstrate, I'll tweak the classes I introduced in my previous posts:

        public class IncrementableClass {
            public int Value;
            public virtual void Increment(int incrementBy) { Value += incrementBy; }
        }
        public class IncrementableClassx2 : IncrementableClass {
            public override void Increment(int incrementBy) {
                base.Increment(incrementBy);
                base.Increment(incrementBy);
            }
        }

    In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever an IncrementableClass[] or object[] is required. When an SZ-array could be used in this fashion, a run-time type check is performed when you try to insert an object into the array, to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time:

        IncrementableClass[] array = new IncrementableClassx2[1];
        array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException

    These checks are enforced by the various stelem* and ldelem* IL instructions in such a way as to ensure you can't insert an IncrementableClass into an IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction.
    ldelema
    This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and it maintained array type-safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt?

        .method public static void CallIncrementArrayValue() {
            // IncrementableClassx2[] arr = new IncrementableClassx2[1]
            ldc.i4.1
            newarr IncrementableClassx2
            // arr[0] = new IncrementableClassx2();
            dup
            newobj instance void IncrementableClassx2::.ctor()
            ldc.i4.0
            stelem.ref
            // IncrementArrayValue<IncrementableClass>(arr, 0)
            // here, we're treating an IncrementableClassx2[] as IncrementableClass[]
            dup
            ldc.i4.0
            call void IncrementArrayValue<class IncrementableClass>(!!0[], int32)
            // ...
            ret
        }

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    And the result:

        Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array.
           at IncrementArrayValue[T](T[] arr, Int32 index)
           at CallIncrementArrayValue()

    Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], hence the ldelema instruction is failing as it's expecting an IncrementableClass[].
    On features and feature conflicts
    What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at runtime, whereas compile-time safety is what generics were designed for!
    The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of array elements using a covariant type to the actual run-time type of the array:

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            readonly. ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    But what about type safety? In return for ignoring the type check, the resulting controlled-mutability pointer can only be used in the following situations:
    - As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions
    - As the pointer parameter to ldobj or ldind*
    - As the source parameter to cpobj
    In other words, the only operations allowed are those that read from the pointer; stind* and similar instructions that write through the pointer are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.

    Read the article

  • Android: debug certificate expired error

    - by Bill Osuch
    I started up Eclipse today, created a new project, and immediately had an error before I had changed a single line: Error generating final archive: Debug Certificate expired on 11/12/11. When installed, the Android SDK generates a "debug" signing certificate for you in a file called "debug.keystore". Eclipse uses this certificate rather than forcing you to create a new one for every project. In older versions of Eclipse, the certificate was only valid for 365 days, but as I understand it the default has been changed to 30 years in newer versions. If for whatever reason you don't want to upgrade Eclipse, you can manually delete the certificate to force Eclipse to generate a new one. You can find the location in Preferences -> Android -> Build -> Default debug keystore (mine was in C:\Users\myUserName\.android\); just delete the "debug.keystore" file, then go back into Eclipse and Clean the project to generate a new file.
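    (Not mentioned in the post, but if you want to confirm the expiry date before deleting anything, the JDK's keytool can print it; the debug keystore uses the well-known alias "androiddebugkey" with "android" as both passwords.)

        keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" ^
                -alias androiddebugkey -storepass android -keypass android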

    Read the article

  • Building a distributed system on Amazon Web Services

    - by Songo
    Would simply using AWS to build an application make this application a distributed system? For example, if someone uses RDS for the database server, EC2 for the application itself, and S3 for hosting user-uploaded media, does that make it a distributed system? If not, then what should it be called, and what is this application lacking for it to be distributed? Update: Here is my take on the application to clarify my approach to building the system. The application I'm building is a social game for Facebook. I developed the application locally on a LAMP stack using Symfony2. For production I used a single EC2 Micro instance for hosting the app itself, RDS for hosting my database, S3 for the user-uploaded files and CloudFront for hosting static content. I know this may sound like a naive approach, so don't be shy about expressing your ideas.

    Read the article
