Search Results

Search found 12666 results on 507 pages for 'knowledge base'.

  • Cooperator Framework

    - by csharp-source.net
    Cooperator Framework is a base class library for high-performance Object Relational Mapping (ORM) and a code generation tool that aids agile application development for the Microsoft .NET Framework 2.0/3.0. The main features are: * Uses business entities. * Fully typed model (data layer and entities). * Maintains persistence across the layers by passing specific types (.NET 2.0/3.0 generics). * Business objects can bind to controls in Windows Forms and Web Forms, taking advantage of Visual Studio 2005 data binding. * Supports any primary key defined on tables, with no need to modify it or create a unique field. * Uses stored procedures for data access. * Supports concurrency. * Generates code for both stored procedures and projects, in C# or Visual Basic. * Maintains the model in a repository, which can be modified at any stage of the development cycle, regenerating the model on demand.
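    The Cooperator Framework API itself is not shown in the excerpt, so as an illustration only, here is a minimal sketch of the general idea it describes: a plain typed business entity whose list binds to a Windows Forms control through standard .NET data binding. The Customer class and its properties are made up for the example and are not part of the framework.

        // Illustrative sketch only - standard .NET data binding, not the Cooperator Framework API.
        using System.ComponentModel;
        using System.Windows.Forms;

        public class Customer
        {
            public int Id { get; set; }       // any existing primary key would do
            public string Name { get; set; }
        }

        public class CustomerForm : Form
        {
            public CustomerForm()
            {
                var customers = new BindingList<Customer>
                {
                    new Customer { Id = 1, Name = "Contoso" },
                    new Customer { Id = 2, Name = "Fabrikam" }
                };

                // BindingSource gives the grid change notification and currency management.
                var bindingSource = new BindingSource { DataSource = customers };
                var grid = new DataGridView { Dock = DockStyle.Fill, DataSource = bindingSource };
                Controls.Add(grid);
            }
        }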

    Read the article

  • Are there any non-English programming languages? [closed]

    - by samarudge
    Possible Duplicate: Non-English-based programming languages Not sure if this is the right place to ask this, but I'm going to anyway. Without fail, every programming language I've ever seen, used or heard of has its keywords based around English. if, else, while, for, query, foreach, image, path, extension, the list goes on, are all based around English words. Are there any languages, or ports of languages, that base their core keywords on non-English words to lower the wall for non-English-speaking programmers? This is mostly out of interest (English is my first language, so it's no problem for me). Are these languages popular locally (i.e. might a software development house in Germany use a programming language based on German over one based on English)?

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that learning can mean anything from "can give a ten-minute intro" to "can refactor and shrink this to 30% of its size". Let's leave that open for now.

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background
    While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough for customers to typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions
    The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy to find what you need with little digging required. I've worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward. You can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), Windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness. Then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.

    Snapshots
    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!

    Analysis
    It looks like (from the second pie chart) that the majority of the allocations were in the string class. This is also expected because the majority of the memory allocated is in the web service responses, so the entities I'm adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) don't seem to be taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for "What to look for on the summary" which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph and discovered that they were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. That is, for instances of a class that is growing in memory (more are being created than cleaned up), it shows which ones are survivors (not collected) of garbage collection. This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph", which creates a graph showing what references are being held in memory and who is holding onto them.

    Visual Studio Integration
    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it make it well worth it.

    Summary
    If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!
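    The article mentions the 5-second quote cache but not its code, so here is a minimal C# sketch of that kind of short-lived cache, purely as an assumption of how it might look: GetQuote, the 5-second TTL constant and the fetchQuote delegate are illustrative names, and the sketch caches results but does not deduplicate concurrent in-flight calls.

        // Minimal sketch of a short-lived (5-second) quote cache - not the author's actual code.
        using System;
        using System.Collections.Concurrent;

        public class QuoteCache
        {
            private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(5);

            private readonly ConcurrentDictionary<string, Tuple<DateTime, decimal>> _cache =
                new ConcurrentDictionary<string, Tuple<DateTime, decimal>>();

            private readonly Func<string, decimal> _fetchQuote;   // stands in for the real web service call

            public QuoteCache(Func<string, decimal> fetchQuote)
            {
                _fetchQuote = fetchQuote;
            }

            public decimal GetQuote(string symbol)
            {
                Tuple<DateTime, decimal> entry;
                if (_cache.TryGetValue(symbol, out entry) &&
                    DateTime.UtcNow - entry.Item1 < Ttl)
                {
                    return entry.Item2;                     // fresh enough, reuse it
                }

                decimal quote = _fetchQuote(symbol);        // cache miss or stale: call the service
                _cache[symbol] = Tuple.Create(DateTime.UtcNow, quote);
                return quote;
            }
        }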

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx

    Setting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here's a step-by-step which gets you from 0 to a fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up). First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows, but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.). While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We'll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it's the same from step 3 onwards):

    1. Set up Mongo. Log into the first node, download mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs.

    2. Set up your data disk. When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120Gb disk and host your data on C:, then Mongo will allocate 6Gb for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10Gb, and then creating a new logical disk for your data from that spare 10Gb, which will be allocated as E:. Create a new folder, e:\data.

    3. Start Mongo. When that's done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service, running in replica set mode, and start the service:
        cd c:\mongodb\bin
        mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
        net start mongodb

    4. Open the ports. Mongo uses port 27017 by default, so you need to allow access on the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab, add a new rule, again to allow port 27017.

    5. Initialise the replica set. Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set:
        c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net
        rs.initiate()
    This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you're using the default C: drive with 120Gb, it may take so long that rs.initiate() never responds; if you're sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured.

    6. Fix the host name for the primary – *don't miss this one*. For the first node in the replica set, Mongo on Windows doesn't populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn't accessible to the outside world. The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this:
        cfg = rs.conf()
        cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'
        rs.reconfig(cfg)
    When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there's no replication yet.

    7. Add more nodes. For the next two VMs, follow steps 1 through 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you're using:
        rs.add('sc-xyz-db2.cloudapp.net:27017')
    Run rs.status() and you'll see your new node in the STARTUP2 state, which means it's initializing and replicating from the PRIMARY. Repeat for your third node:
        rs.add('sc-xyz-db3.cloudapp.net:27017')
    When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service.

    Note – the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.
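    As a follow-on, here is a minimal sketch of connecting to the finished replica set from .NET, assuming the official MongoDB .NET driver (MongoDB.Driver 2.x); the host names and the replica set name simply reuse the placeholders from the post, and the database and collection names are made up.

        // Sketch only: assumes the MongoDB .NET driver 2.x; hostnames are the post's placeholders.
        using MongoDB.Bson;
        using MongoDB.Driver;

        class ReplicaSetDemo
        {
            static void Main()
            {
                // Listing all members plus replicaSet lets the driver discover the current
                // PRIMARY and fail over automatically if db1 goes down.
                var client = new MongoClient(
                    "mongodb://sc-xyz-db1.cloudapp.net:27017," +
                    "sc-xyz-db2.cloudapp.net:27017," +
                    "sc-xyz-db3.cloudapp.net:27017/?replicaSet=TheReplicaSet");

                var database = client.GetDatabase("demo");
                var collection = database.GetCollection<BsonDocument>("docs");

                // Writes always go to the PRIMARY; reads follow the configured read preference.
                collection.InsertOne(new BsonDocument { { "hello", "replica set" } });
            }
        }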

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to reinstall with 12.10, intending to reinstall only the base partition. After several seconds, I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding 100 unique partitions when I run it - they mostly tend to be HFS+ or Solaris /home (which I think is just an ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then run data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

    Mon Dec 10 06:03:00 2012 Command line: TestDisk TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64 Compiler: GCC 4.4 Compilation date: 2012-11-27T22:44:52 ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none /dev/sda: LBA, HPA, LBA48, DCO support /dev/sda: size 3907029168 sectors /dev/sda: user_max 3907029168 sectors /dev/sda: native_max 3907029168 sectors Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512 /dev/sr0 is not an ATA disk Hard disk list Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80 Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07 Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12 Partition table type (auto): Intel Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0 Partition table type: EFI GPT Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 Current partition structure: Bad GPT partition, invalid signature.
search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB Results P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB interface_write() 1 P MS Data 2048 3900753919 3900751872 2 P Linux Swap 3900755968 3907028975 6273008 search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 477915164 recover_EXT2: part_size 3823321312 MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB block_group_nr 1 ....snip...... 
MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 
GiB Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB MS Data 174395467 178483851 4088385 ..... snip (it keeps going on for quite a while)

    Read the article

  • 2D XNA Tile Based Lighting. Ideas and Methods

    - by Twitchy
    I am currently working on developing a 2D tile-based game, similar to the game 'Terraria'. We have the base tile and chunk engine working and are now looking to implement lighting. Instead of the tile-based lighting that Terraria uses, I want to implement point lights for torches, etc. I have seen Catalin Zima's shader-based shadows, and this would be perfect for the torches (point lights). My problem is that the tiles on the surface of the world need to be illuminated; doing this with one big point light is extremely expensive and also doesn't look right. What I need help with (overall) is... To have a surface that is illuminated regardless of torches, etc. To also have point lights, or smooth tile lighting similar to Catalin Zima's shader-based shadows. Looking forward to your replies. Any ideas are appreciated.
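    One common approach for this (an assumption, not something from the question) is to propagate light through the tile grid with a breadth-first flood fill: sky light seeds every above-ground tile so the surface is lit without torches, and torches simply add more seeds; shader-based point lights can then be layered on top. A rough C# sketch, with grid size, falloff constants and the solid-tile map all illustrative:

        // Rough sketch of Terraria-style tile lighting via BFS flood fill (illustrative values).
        using System;
        using System.Collections.Generic;

        public static class TileLighting
        {
            public static float[,] Compute(bool[,] solid, IEnumerable<(int x, int y, float power)> torches)
            {
                int w = solid.GetLength(0), h = solid.GetLength(1);
                var light = new float[w, h];
                var queue = new Queue<(int x, int y)>();

                void Seed(int x, int y, float power)
                {
                    if (power > light[x, y]) { light[x, y] = power; queue.Enqueue((x, y)); }
                }

                // Sky light: walk each column down until the first solid tile.
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h && !solid[x, y]; y++)
                        Seed(x, y, 1.0f);

                foreach (var (x, y, power) in torches)
                    Seed(x, y, power);

                // BFS: light loses a little in air and a lot in solid tiles.
                int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
                while (queue.Count > 0)
                {
                    var (cx, cy) = queue.Dequeue();
                    for (int i = 0; i < 4; i++)
                    {
                        int nx = cx + dx[i], ny = cy + dy[i];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        float falloff = solid[nx, ny] ? 0.25f : 0.08f;
                        Seed(nx, ny, light[cx, cy] - falloff);
                    }
                }
                return light; // multiply each tile's color by light[x, y] when drawing
            }
        }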

    Read the article

  • SPECIALIZATION TO DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    - by michaela.seika(at)oracle.com
    The market increasingly demands solutions and commitments from us. To build these solutions, we rely on you, Oracle Partners. In terms of commitments, Oracle must communicate to the market about the specialization of its partners and their skills, based on the projects customers ask us to address. More than 50 specializations are available today for Gold, Platinum and Diamond partners: • on Technology products such as the Database, Database options, SOA, Business Intelligence, … • on Applications products such as ERP, CRM, … • on Hardware products and Operating Systems. To help you specialize and therefore get certified, our two value-added distributors, Altimate and Arrow ECS, assist you in this process. ALTIMATE invites you to take part in the "Lunch & Spécialisation" tour: take advantage of these programs, set up for you so you can specialize and enjoy all the benefits that specialization gives you access to. ARROW ECS invites you to take part in "L'Ecole de la spécialisation Oracle by Arrow": take advantage of these programs, set up for you so you can specialize and enjoy all the benefits that specialization gives you access to. Oracle Solutions Tour: discover the Oracle solution during this tour of France. On the program: roadmaps, product and solution workshops, certifications. BENEFITS (learn more): • Oracle's commitment alongside its partners to address major deals • visibility with customers, to be identified as an Expert on an offering, recognized and validated by Oracle • support (free support access) and Oracle University (vouchers to certify your Implementation Consultant teams free of charge) • Marketing budgets (lead generation, creation of Marketing campaigns, sponsoring of customer events). Differentiate yourself by specializing in your area of expertise and accelerate your success! Oracle and its Value-Added Distributors. Eric Fontaine, Director Alliances & Channel Technology Southern Europe, presents specialization and its benefits in a video. CONTACTS: ORACLE Jean-Jacques Panissié, Oracle Partner Development A&C Technology, +33 157 60 28 52; ALTIMATE Sophie Daval, +33 1 34 58 47 68; ARROW [email protected], +33 1 49 97 59 63

    Read the article

  • Oracle ETPM is renamed Oracle Public Sector Revenue Management (PSRM)

    - by Rick Finley
    We are excited to announce that with the upcoming release of v2.4, we are renaming ETPM to Oracle Public Sector Revenue Management (Oracle PSRM). This is a pure name change, and all terms and conditions for existing customer licensing remain unchanged. We feel that this updated naming is a better reflection of our current customer base, which includes tax revenue for many Departments of Revenue, as well as agencies that manage non-tax revenue, such as regulatory fees, loans, and social benefits. Please note that, as part of this name change, related products in the Oracle ETPM family, such as Oracle Tax Analytics and Oracle ETPM Self Service, will be renamed at their next major product release to align with the Oracle PSRM theme.

    Read the article

  • Multiple Denial of Service vulnerabilities in libpng

    - by chandan
    CVE             Description                              CVSSv2 Base Score
    CVE-2007-5266   Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5267   Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5268   Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5269   Denial of Service (DoS) vulnerability    5.0
    CVE-2008-1382   Denial of Service (DoS) vulnerability    7.5
    CVE-2008-3964   Denial of Service (DoS) vulnerability    4.3
    CVE-2009-0040   Denial of Service (DoS) vulnerability    6.8
    Component: PNG reference library (libpng)
    Product and Resolution: Solaris 10 (SPARC: 137080-03, X86: 137081-03); Solaris 9 (SPARC: 139382-02 114822-06, X86: 139383-02); Solaris 8 (SPARC: 114816-04, X86: 114817-04)
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Multiple vulnerabilities in Oracle Java Web Console

    - by RitwikGhoshal
    CVE             Description                                                    CVSSv2 Base Score
    CVE-2011-0534   Resource Management Errors vulnerability                       5.0
    CVE-2011-1184   Permissions, Privileges, and Access Controls vulnerability     5.0
    CVE-2011-2204   Information Exposure vulnerability                             1.9
    CVE-2011-2526   Improper Input Validation vulnerability                        4.4
    CVE-2011-2729   Permissions, Privileges, and Access Controls vulnerability     5.0
    CVE-2011-3190   Permissions, Privileges, and Access Controls vulnerability     7.5
    CVE-2011-3375   Information Exposure vulnerability                             5.0
    CVE-2011-4858   Resource Management Errors vulnerability                       5.0
    CVE-2011-5062   Permissions, Privileges, and Access Controls vulnerability     5.0
    CVE-2011-5063   Improper Authentication vulnerability                          4.3
    CVE-2011-5064   Cryptographic Issues vulnerability                             4.3
    CVE-2012-0022   Numeric Errors vulnerability                                   5.0
    Component: Apache Tomcat
    Product and Resolution: Solaris 10 (SPARC: 147673-04, X86: 147674-04)
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • CVE-2014-0591 Buffer Errors vulnerability in Bind

    - by Ritwik Ghoshal
    CVE             Description                      CVSSv2 Base Score
    CVE-2014-0591   Buffer Errors vulnerability      2.6
    Component: Bind
    Product and Resolution: Solaris 10 - patches planned but not yet available; Solaris 11.1 - 11.1.19.6.0; Solaris 8 - patches planned but not yet available; Solaris 9 - patches planned but not yet available
    Please note: the patches mentioned above will upgrade Bind to 9.6-ESV-R11. The fix for CVE-2014-0591 was initially distributed via 9.6-ESV-R10-P2 as described in our previous blog post. This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • dependency problems now not able to install or remove any package

    - by Manish gour
    I was installing wvdial. I no longer need it, but because of it the machine ended up with dependency problems, and now I am unable to install or remove any package. Please help. Below is the output: The following packages have unmet dependencies: libqt3-mt:i386: Depends: libjpeg62 but it is not installed libusb-dev: Depends: libusb-0.1-4 (= 2:0.1.12-14) but 2:0.1.12-20 is installed dependency problems:i386: Depends: libc6 (= 2.4) but 2.15-0ubuntu10.4 is installed Depends: libuniconf4.4 but it is not installed Depends: libwvstreams4.4-base but it is not installed Depends: libwvstreams4.4-extras but it is not installed Depends: libxplc0.3.13 but it is not installed Depends: ppp (= 2.3.0) but it is not installed

    Read the article

  • Workflow with Flash Pro CS6 and FlashDevelop: Using fla and swc to store assets

    - by Arthur Wulf White
    I am using this tutorial: http://www.flashdevelop.org/wikidocs/index.php?title=AS3:FlexAndFlashCS3Workflow In older versions of Flash Pro I was able to complete these steps: right-click the symbol in the Library panel, select the "Linkage..." dialog, check "Export for ActionScript" and fill in the symbol name (i.e. MySymbol_design or assets.MySymbol_design), and do not change the base class (i.e. flash.display.MovieClip). Right now, I am stuck at that part. Any hints? What I wish to do is: use the fla for the artist to store assets, publish to a swc, and extract the assets in FlashDevelop by creating an instance of their class. ... How is this done in CS6? To clear things up, this is what I see when I right-click a Flash symbol.

    Read the article

  • "Il faut repenser les OS pour les processeurs multi-coeurs", d'après Microsoft

    "Il faut repenser les OS pour les processeurs multi-coeurs", d'après Microsoft Dave Probert est expert du noyau chez Microsoft. Selon lui, l'approche actuelle du multi-coeur n'est pas encore à même d'en exploiter toute la puissance, et est trop "compliquée". Aussi, propose-t-il une autre organisation. Car, d'après lui, "la solution" ne se situerait pas dans l'amélioration de techniques "comme le parallel programming, mais plutôt dans le refonte des abstractions de base qui constituent le modèle du système d'exploitation". Il explique qu'on ne tire pas assez parti des performances offertes par les processeurs multicoeurs et qu'aujourd'hui, on ne devrait plus avoir à patienter de...

    Read the article

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up - but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer were able to log into the site and see the progress of their projects. I do hope you know what I'm talking about, and that you'll be able to offer some suggestions of either web-based sites that offer something along these lines, or some open source solution or something like that. Thank you

    Read the article

  • Informed TDD &ndash; Kata &ldquo;To Roman Numerals&rdquo;

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD
    I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"
    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse
    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve
    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman "digits" are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up. Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement
    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results at the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it into two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is the appropriate one, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way: the acceptance tests went green, too. Mission accomplished – at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I'd rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflection
    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – that's the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised, so private methods could naturally stay private and I did not need to make them internal or public just to be able to test them; and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
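    The article's own code appears only as screenshots that did not survive, so here is a compact C# reconstruction of the approach it describes (a factor map, find the factors, translate them, compile the result); the class and method names echo the article, but the bodies are a sketch, not the author's exact code.

        // Sketch of the described approach: find factors, translate, compile.
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // Roman "digits" and their values, ordered from highest to lowest factor.
            private static readonly (int Factor, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"), (90, "XC"),
                (50, "L"), (40, "XL"), (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public string ToRoman(int arabic)
            {
                var factors = Find_factors(arabic);
                var digits = Translate(factors);
                return Compile(digits);
            }

            // Find the highest remaining factor fitting in the value, subtract it, repeat.
            private static IEnumerable<int> Find_factors(int arabic)
            {
                foreach (var (factor, _) in Map)
                    while (arabic >= factor)
                    {
                        yield return factor;
                        arabic -= factor;
                    }
            }

            // Translation is just a look-up of each factor's roman "digit".
            private static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(f => Map.First(m => m.Factor == f).Digit);

            private static string Compile(IEnumerable<string> digits) => string.Concat(digits);
        }

    For example, new ArabicRomanConversions().ToRoman(1954) yields "MCMLIV" (1000 + 900 + 50 + 4), matching the factor decomposition worked through in the article.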

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine) the game objects are divided into two types, world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first glance this idea is reasonable, but it can take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
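    For illustration, here is a bare-bones C# sketch of exactly the pattern the question describes: a base entity with a think method and an engine that walks a list each frame. The class and member names are made up, not Source engine API; the NextThinkTime field hints at one common mitigation, only ticking entities whose scheduled think time has arrived.

        // Bare-bones entity/think-loop sketch (illustrative names, not Source engine API).
        using System;
        using System.Collections.Generic;

        public abstract class Entity
        {
            // When this entity next wants to think; engines often skip entities whose
            // time hasn't arrived yet instead of calling everything every frame.
            public double NextThinkTime { get; protected set; }

            public abstract void Think(double now, double deltaSeconds);
        }

        public class Player : Entity
        {
            public override void Think(double now, double deltaSeconds)
            {
                // read input, move, update score, etc.
            }
        }

        public class GameWorld
        {
            private readonly List<Entity> _entities = new List<Entity>();

            public void Spawn(Entity e) => _entities.Add(e);

            public void Update(double now, double deltaSeconds)
            {
                foreach (var entity in _entities)
                {
                    if (entity.NextThinkTime <= now)   // cheap way to avoid ticking everything
                        entity.Think(now, deltaSeconds);
                }
            }
        }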

    Read the article

  • How to integrate game logic in game engines

    - by MahanGM
    Recently I have been working on a 2D game engine example in .NET with C#. My main problem is that I can't figure out how I should include the game logic within the game. Currently I have a base engine, which is a set of classes running sub-systems like Render, Sound, Input and Core functionality. There is an editor which helps the user add resources, build levels, write scripts and other things. I came up with the idea of using Reflection and CSharpCodeProvider from my editor to compile the written code. This way I can get an executable of my product too. This approach works quite well, but I would like to know what the real solution and architecture for this is. My engine targets 2D platformers. The scripting language is C# right now because I can't consider any other embeddable language for now. The game needs compilation, and CSharpCodeProvider is the only way for me to do it in the meantime.
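    As context for that approach, here is a minimal sketch of compiling a script at runtime with CSharpCodeProvider and loading it via reflection. The IScript interface and the idea of scanning the compiled assembly for implementations are assumptions for the example, not part of any particular engine.

        // Minimal compile-and-load sketch using CSharpCodeProvider (IScript is a made-up example interface).
        using System;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        public interface IScript { void Run(); }

        public static class ScriptCompiler
        {
            public static IScript Compile(string source)
            {
                using (var provider = new CSharpCodeProvider())
                {
                    var options = new CompilerParameters
                    {
                        GenerateInMemory = true   // keep the compiled assembly in memory
                    };
                    options.ReferencedAssemblies.Add("System.dll");
                    options.ReferencedAssemblies.Add(typeof(IScript).Assembly.Location);

                    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException(results.Errors[0].ToString());

                    // Find the first type implementing IScript and instantiate it via reflection.
                    foreach (var type in results.CompiledAssembly.GetTypes())
                        if (typeof(IScript).IsAssignableFrom(type) && !type.IsInterface)
                            return (IScript)Activator.CreateInstance(type);

                    throw new InvalidOperationException("No IScript implementation found.");
                }
            }
        }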

    Read the article

  • Why don't computers store decimal numbers as a second whole number?

    - by SomeKittens
    Computers have trouble storing fractional numbers whose denominator is something other than a power of 2. This is because the first digit after the binary point is worth 1/2, the second 1/4 (that is, 1/(2^1) and 1/(2^2)), etc. Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as another whole number (which is therefore exact)? The only thing I can think of is dealing with repeating decimals (in base 10), but there could have been an edge-case solution to that (like we currently have with infinity).
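    A quick C# illustration of the premise: binary floating point cannot represent 0.1 exactly, while the decimal type, which effectively stores a scaled integer (an integer plus a base-10 exponent), can.

        // double vs decimal: binary rounding error vs exact scaled-integer storage.
        using System;

        class FloatingPointDemo
        {
            static void Main()
            {
                double d = 0.1 + 0.2;
                Console.WriteLine(d == 0.3);          // False
                Console.WriteLine(d.ToString("R"));   // 0.30000000000000004 (round-trip format)

                decimal m = 0.1m + 0.2m;
                Console.WriteLine(m == 0.3m);         // True: 3 stored with a scale of 1
            }
        }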

    Read the article

  • Sprites as Actors

    - by Scán
    Hello, I'm not experienced in GameDev questions, but I am as a programmer. In the Scala language you can have scalable multi-tasking with Actors, which is very stable, as I hear. You can even have hundreds of thousands of them running at once without a problem. So I thought maybe you could use these as a base class for 2D sprites, to break out of the game-loop approach that requires going through all the sprites and moving them. They'd basically move themselves, event-driven. Would that make sense for a game? Having it multitasked like that? After all, it will run on the JVM, though that should not be much of a problem nowadays.

    Read the article

  • How do I tell Ubuntu to install only what is asked for and not bring in other dependencies that will break the whole system?

    - by YumYumYum
    How can I install only python-webkit and not the other packages it is showing to install? (No gstreamer*.*; I do not want a single one of those files installed in my distro because of the GPL license, and it slows my machine down a lot.) $ sudo apt-get install libwebkitgtk-1.0-0 python-webkit Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 Suggested packages: gstreamer-codec-install gnome-codec-install gstreamer0.10-tools gstreamer0.10-plugins-base The following NEW packages will be installed: libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 libwebkitgtk-1.0-0 0 upgraded, 3 newly installed, 0 to remove and 333 not upgraded. Need to get 8,231 kB of archives. After this operation, 28.2 MB of additional disk space will be used. Do you want to continue [Y/n]? n Abort.

    Read the article

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing and global supply chains much more complex today, and the pace of change ever increasing, leading organizations need better ways to orchestrate their supply chain synchronization with their partner and customer base. The Mar/Apr '10 edition of EM magazine covers this topic in the article "Strategising for Success" (pages 26-27) and discusses the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and, ultimately, their end users, the customer. I'll paste the link to the article here as soon as I validate/confirm it.

    Read the article

  • Examples of interesting implementations of character stats?

    - by Tchalvak
    I've got this BBG going ( http://ninjawars.net ), and the character stats are currently simplistic. I'm looking to add a few stats to the current one or two (strength and maximum hitpoints, essentially). I've come up with strength (unchanged), speed, stamina, and some other somewhat interesting wildcard stats. However, I'm not satisfied with how boring the effects of some of these stats are, because they're very linear: better stat, better effects of the stat, but the stats don't interact with each other, there's no rock-paper-scissors interaction, and having more is always better all the time. So what I'd really like is to see examples of interesting character stats or effects of stats. Examples that I can think of offhand: Call of Cthulhu's Insanity stat (things get really weird/chaotic if you start losing sanity); White Wolf stats, to a certain extent (the stats themselves have some basic effects, and all skills base their effectiveness on stats as well). What are some other ways people have used stats that are worth checking out?

    Read the article
