Search Results

Search found 31884 results on 1276 pages for 'microsoft sync framework'.

Page 494/1276 | < Previous Page | 490 491 492 493 494 495 496 497 498 499 500 501  | Next Page >

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

        SOURCE:
            CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
            CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

        TARGET:
            CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
            CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
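    (As a rough illustration of the dictionary queries described above – not Red Gate's actual implementation – the Python sketch below assembles the two queries that find the immediate cross-schema edges: foreign keys from all_constraints and everything else from all_dependencies. The get_edges helper and the cursor it takes are invented for the example; any DB-API connection, e.g. cx_Oracle, would do.)

        # Sketch only: fetch immediate dependency edges touching one schema.
        # Assumes a DB-API cursor (e.g. from cx_Oracle); illustrative, not engine code.
        FK_EDGES_SQL = """
            SELECT c.owner, c.table_name, r.owner AS ref_owner, r.table_name AS ref_table
            FROM   all_constraints c
                   JOIN all_constraints r
                     ON  c.r_owner = r.owner
                     AND c.r_constraint_name = r.constraint_name
            WHERE  c.constraint_type = 'R'
              AND (c.owner = :schema OR r.owner = :schema)
        """

        DEP_EDGES_SQL = """
            SELECT owner, name, referenced_owner, referenced_name
            FROM   all_dependencies
            WHERE  owner = :schema OR referenced_owner = :schema
        """

        def get_edges(cursor, schema):
            """Return (object, referenced_object) pairs pointing into or out of `schema`."""
            edges = set()
            for sql in (FK_EDGES_SQL, DEP_EDGES_SQL):
                cursor.execute(sql, schema=schema)
                for owner, name, ref_owner, ref_name in cursor:
                    edges.add(((owner, name), (ref_owner, ref_name)))
            return edges

    Note that this only gives the immediate edges, which is exactly why the chain problem in section 2 arises.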
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        SOURCE:
            CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
            CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

        TARGET:
            CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
            CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE             TARGET
        SchemaA.Table1  -> SchemaA.Table1
        SchemaB.Table1  -> (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        SOURCE:
            CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
            CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
            CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);

        TARGET:
            CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
            CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
            CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find initial dependencies of schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies
       - SchemaA.Table1 -> SchemaB.Table1 found on source
       - No objects found on target
    2. Include objects in both source and target
       - SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects
       - No objects to start with on source
       - SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target
       - SchemaC.Table1 included in source and target
    5. Run dependency query on found objects
       - No objects found in source
       - No objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
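    (To make the final approach more concrete, here is a small Python sketch – an illustration of the idea only, not the Schema Compare engine code – of the client-side traversal described in sections 2–4: per-schema dependency edges are pulled in lazily, blacklisted schemas are never followed, and newly found objects are passed back and forth between source and target until nothing new turns up. The helper names and the load_edges callback are invented for the example.)

        # Illustrative sketch of the lazy, blacklisted dependency closure.
        BLACKLIST_SCHEMAS = {"SYS", "PUBLIC"}

        def closure(load_edges, start_objects):
            """load_edges(schema) -> {(owner, name): set of referenced (owner, name)}."""
            edges_by_schema = {}                    # schemas already pulled from the server
            seen = set(start_objects)
            frontier = list(start_objects)
            while frontier:
                owner, name = frontier.pop()
                if owner in BLACKLIST_SCHEMAS:      # don't follow chains through SYS/PUBLIC
                    continue
                if owner not in edges_by_schema:    # the 'thunk': load this schema on demand
                    edges_by_schema[owner] = load_edges(owner)
                for dep in edges_by_schema[owner].get((owner, name), ()):
                    if dep not in seen:
                        seen.add(dep)
                        frontier.append(dep)
            return seen

        def comparison_closure(load_src_edges, load_tgt_edges, selected_objects):
            """Ping-pong between source and target until no new objects are found."""
            known = set(selected_objects)
            while True:
                found = (closure(load_src_edges, known) |
                         closure(load_tgt_edges, known)) - known
                if not found:
                    return known                    # populate these objects on BOTH sides
                known |= found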

    Read the article

  • computer shows up twice, connection unknown

    - by Thomas G. Seroogy
    I added two computers to Ubuntu One. One machine is Windows. The Windows version seems to work fine, and I started syncing a folder by placing it into the Ubuntu One folder. All files and folders are visible when I go to my account online. On the Ubuntu machine, I selected to sync the Download folder. Shortly thereafter, I realized that one folder exceeded my storage max. I tried to un-sync the folder, but Ubuntu One and the Stop Syncing This Folder option were not visible in the menu. Per Ubuntu instructions, I removed my Ubuntu computer from the list of syncing computers. Per Ubuntu instructions, I re-added the Ubuntu computer. However, I find that two computers with the same name are added, both in the desktop app and on the web. Plus, the connection is "unknown." I have removed and re-added the computer several times with the same results. In all cases, I remove the computers using the Ubuntu One desktop app, then remove them from my account on the web, remove the Ubuntu One password, and restart the Ubuntu One app. The problem remains. Thanks in advance for any replies and help.

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In software development, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:
    - Early integration helps team members avoid painful merges.
    - It encourages better coding habits, because everyone makes sure that they don't break co-workers' code every day.
    - Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work.
    Actually I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work to go home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code once you've finished an entire task, I believe the best approach is fixing small and flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of some document, blog post, wiki, article or whatever you could suggest to me that is in sync with my opinion? And do you find any problems working with this approach? Thank you in advance.

    Read the article

  • Ubuntu 13.04 client cannot connect to Raspbian samba share

    - by envoyweb
    I have a client Ubuntu 13.04 machine trying to connect to a server running Raspbian, with samba and samba-common-bin installed on the server. I can see my share, and when I try to log in I get this error: Unable to access location: Failed to write windows share: Cannot allocate memory. I have installed ntfs-3g for the USB hard drive, which already auto-mounts on the server, so I never had to create a directory or edit fstab. Testparm on the server states the following:

        [global]
            workgroup = ENVOYWEB
            server string = %h server
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb

        [homes]
            comment = Home Directories
            valid users = %S
            create mask = 0700
            directory mask = 0700
            browseable = No

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers

        [BigDude]
            comment = Sharing BigDude's Files
            path = /media/BigDude/
            valid users = @users
            read only = No
            create mask = 0755

    testparm on the client, which is running Ubuntu, is as follows:

        [global]
            workgroup = ENVOYWEB
            server string = %h server (Samba, Ubuntu)
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    When I do a "cat /proc/mdstat" I get the following:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]
        unused devices: <none>

    When I run "/sbin/mdadm --detail /dev/md2" I get the following:

        /dev/md2:
                Version : 00.90
          Creation Time : Mon Sep 5 23:41:13 2011
             Raid Level : raid10
             Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Thu Oct 25 09:25:08 2012
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2, far=1
             Chunk Size : 64K
                   UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
                 Events : 0.6688691

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed
               2       0        0        2      removed
               3       8       49        3      active sync   /dev/sdd1

    Output of df -h is:

        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/md1                       441G  2.0G  416G   1% /
        none                            32G  236K   32G   1% /dev
        tmpfs                           32G     0   32G   0% /dev/shm
        none                            32G  112K   32G   1% /var/run
        none                            32G     0   32G   0% /var/lock
        none                            32G     0   32G   0% /lib/init/rw
        tmpfs                           64G  215M   63G   1% /mnt/vmware
        none                           441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8% /mnt/RAID10

    When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of the /dev/mapper, could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real-time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means that there will be a delay between the most recent data retrieved and the point in time that I make the request. Second, because I will be receiving a batch of data, I don't want to just fire out all the events down a socket once my fetcher has retrieved it: I would like to spread out the events so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened). My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would be then queued to be sent down the socket 7.5 minutes from its original timestamp – that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes since then. My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature related to implementation best-practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency in which the source is updated will get out of sync, leading to points where no data will be retrieved from the source. However, since my socket delay is greater than my fetch frequency, the subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
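    (A minimal sketch of the scheme described above, with invented helper names: poll the source every five minutes and hold each event in a queue until its original timestamp plus the fixed 7.5-minute display delay has passed. fetch_batch() and emit() stand in for the real source fetcher and socket code.)

        # Sketch: re-spread a batched feed into a delayed 'real-time' stream.
        import heapq
        import itertools
        import time

        POLL_INTERVAL = 5 * 60       # seconds between fetches from the source
        DISPLAY_DELAY = 7.5 * 60     # fixed lag applied to every event

        def run(fetch_batch, emit):
            queue = []                               # (due_time, tiebreak, event) min-heap
            tiebreak = itertools.count()
            next_poll = time.time()
            while True:
                now = time.time()
                if now >= next_poll:
                    for timestamp, payload in fetch_batch():
                        due = timestamp + DISPLAY_DELAY
                        heapq.heappush(queue, (due, next(tiebreak), (timestamp, payload)))
                    next_poll = now + POLL_INTERVAL
                while queue and queue[0][0] <= now:  # emit everything that has come due
                    _, _, event = heapq.heappop(queue)
                    emit(event)
                time.sleep(0.5)                      # coarse tick; tune as needed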

    Read the article

  • How to improve my backup strategy (rsync)?

    - by GUI Junkie
    I've seen the Q&As about backup solutions, but I'm asking anyway: one, because it's a personal situation I haven't solved yet, and two, because the answer can be useful for others. My situation is rather simple. I have two computers with two users and one external hard drive. I want to sync/backup a shared directory. Currently I use rsync with the -azvu options to sync to the external drive. My problem is the round trip: all deleted files are restored! Using rsync I'm doing:
        Computer A --> External disk --> Computer A
        Computer B --> External disk --> Computer B
    (I should probably do External disk --> Computer A as a last step.) I've seen 'bup' mentioned and other Q&As talk about Dropbox + rsync... Another option is maybe to have rsync delete files? Can my current backup strategy be improved in some other way?
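    (To make the round trip concrete, here is a small illustrative sketch – the paths are made up – of the flow described above, driven from Python. Plain rsync -azvu copies new and updated files but never propagates deletions, which is why deleted files come back; rsync's --delete option is what makes a one-way run mirror deletions, though it needs care in a two-way setup like this.)

        # Illustrative only: the round-trip sync described above, with made-up paths.
        import subprocess

        SHARED = "/home/user/shared/"            # hypothetical shared directory
        EXTERNAL = "/media/external/shared/"     # hypothetical mount point of the external disk

        def rsync(src, dst, delete=False):
            cmd = ["rsync", "-azvu", src, dst]
            if delete:
                cmd.insert(1, "--delete")        # mirror deletions too (use with care)
            subprocess.check_call(cmd)

        # The round trip from the question: anything deleted locally is copied
        # straight back from the external disk on the second call.
        rsync(SHARED, EXTERNAL)
        rsync(EXTERNAL, SHARED)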

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
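    (Purely as an illustration of the "script that keeps smaller databases in sync" option mentioned above – every table, column and path here is invented, and sqlite3 merely stands in for whatever engine the sites actually run – a sketch of pushing changed product rows from a master catalogue out to each site database:)

        # Hypothetical sketch: propagate master catalogue changes to per-site databases.
        import sqlite3

        def push_changes(master_path, site_paths, since):
            master = sqlite3.connect(master_path)
            changed = master.execute(
                "SELECT sku, name, image_url, updated_at FROM products WHERE updated_at > ?",
                (since,),
            ).fetchall()
            for path in site_paths:
                site = sqlite3.connect(path)
                with site:                            # one transaction per site
                    for sku, name, image_url, updated_at in changed:
                        # only rows whose sku already exists on this site are touched
                        site.execute(
                            "UPDATE products SET name = ?, image_url = ?, updated_at = ? "
                            "WHERE sku = ?",
                            (name, image_url, updated_at, sku),
                        )
                site.close()
            master.close()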

    Read the article

  • How do I stop video tearing? (Nvidia prop driver, non-compositing window manager)

    - by Chan-Ho Suh
    I have that problem which seemingly afflicts many using the proprietary Nvidia driver: Video tearing: fine horizontal lines (usually near the top of my display) when there is a lot of panning or action in the video. (Note: switching back to the default nouveau driver is not an option, as its seemingly nonexistent power-management drains my battery several times faster) I've tried Totem, Parole, and VLC, and tearing occurs with all of them. The best result has been to use X11 output in VLC, but there is still tearing with relatively moderate action. Hardware: MacBook Air 3,2 -- which has an Nvidia GeForce 320M. There are two common fixes for tearing with Nvidia prop drivers: Turn off compositing, since Nvidia proprietary drivers don't usually play nice with compositing window managers on Linux (Compiz is an exception I'm aware of). But I use an extremely lightweight window manager (Awesome window manager) which is not even capable of compositing (or any cool effects). I also have this problem in Xfce, where I have compositing disabled. Enabling sync to VBlank. To enable this, I set the option in nvidia-settings and then autostart it as nvidia-settings -l with my other autostart programs. This seems to work, because when I run glxgears, I get: $ glxgears Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate. 303 frames in 5.0 seconds = 60.500 FPS 300 frames in 5.0 seconds = 59.992 FPS And when I check the refresh rate using nvidia-settings: $ nvidia-settings -q RefreshRate Attribute 'RefreshRate' (wampum:0.0; display device: DFP-2): 60.00 Hz. All this suggests sync to VBlank is enabled. As I understand it, this is precisely designed to stop tearing, and a lot of people's problem is even getting something like glxgears to output the correct info. I don't understand why it's not working for me. xorg.conf: http://paste.ubuntu.com/992056/ Example of observed tearing::

    Read the article

  • CS1685 Warning causes a CS0433 error when targeting 3.5 in VS2010

    - by Adam Driscoll
    I have a 2010 project that is targeting .NET v3.5. It was working fine until I started to mess with configurations a bit, and now I cannot figure out what I'm doing wrong. The project doesn't have ANY references added. It won't even let me add a reference to System.Core as it is added by the 'build system'.

        warning CS1685: The predefined type 'System.Func' is defined in multiple assemblies in the global alias; using definition from 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'
        IFilter.cs(82,49): error CS0433: The type 'System.Func' exists in both 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll' and 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll'

    Looks like something is grabbing onto 4.0 but I'm not quite sure how to fix it. Anyone else run into this?

    Read the article

  • Setup Google Test (gtest) with Eclipse on OS X

    - by ejel
    What is the procedure to set up Google Test to work under Eclipse on Mac OS X? I followed the instructions in the README to compile and install gtest as a framework from Xcode. Now I want to use gtest with Eclipse. Currently, it compiles fine but fails during linking. I suppose Eclipse does not use the framework concept as Xcode does and needs a different linking approach, but I'm not sure which files I should link against during the build.

        g++ -L/usr/local/lib -L/usr/local/lib/libgtest.a -L/Library/Frameworks/gtest.framework -arch i386 -o "Raytracer" ./test/sample_test.o ./src/Raytracer.o
        Undefined symbols:
          "testing::Test::~Test()", referenced from:
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test()in sample_test.o
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test()in sample_test.o
          "testing::internal::AssertHelper::~AssertHelper()", referenced from:
              DemoTest_SANITY_Test::TestBody() in sample_test.o
              DemoTest_SANITY_Test::TestBody() in sample_test.o

    Read the article

  • What is the best Battleship AI?

    - by John Gietzen
    Battleship! Back in 2003, (when I was 17,) I competed in a Battleship AI coding competition. Even though I lost that tournament, I had a lot of fun and learned a lot from it. Now, I would like to resurrect this competition, in the search of the best battleship AI. Here is the framework: Battleship.zip

    The winner will be awarded +450 reputation! The competition will be held starting on the 17th of November, 2009. No entries or edits later than zero-hour on the 17th will be accepted. (Central Standard Time) Submit your entries early, so you don't miss your opportunity! To keep this OBJECTIVE, please follow the spirit of the competition.

    Rules of the game (illustrated by the short sketch after the edit notes below):
    - The game is played on a 10x10 grid.
    - Each competitor will place each of 5 ships (of lengths 2, 3, 3, 4, 5) on their grid. No ships may overlap, but they may be adjacent.
    - The competitors then take turns firing single shots at their opponent. A variation on the game allows firing multiple shots per volley, one for each surviving ship.
    - The opponent will notify the competitor if the shot sinks, hits, or misses.
    - Game play ends when all of the ships of any one player are sunk.

    Rules of the competition:
    - The spirit of the competition is to find the best Battleship algorithm. Anything that is deemed against the spirit of the competition will be grounds for disqualification.
    - Interfering with an opponent is against the spirit of the competition.
    - Multithreading may be used under the following restrictions:
      - No more than one thread may be running while it is not your turn. (Though, any number of threads may be in a "Suspended" state).
      - No thread may run at a priority other than "Normal".
      - Given the above two restrictions, you will be guaranteed at least 3 dedicated CPU cores during your turn.
    - A limit of 1 second of CPU time per game is allotted to each competitor on the primary thread. Running out of time results in losing the current game.
    - Any unhandled exception will result in losing the current game.
    - Network access and disk access is allowed, but you may find the time restrictions fairly prohibitive. However, a few set-up and tear-down methods have been added to alleviate the time strain.
    - Code should be posted on stack overflow as an answer, or, if too large, linked.
    - Max total size (un-compressed) of an entry is 1 MB.
    - Officially, .Net 2.0 / 3.5 is the only framework requirement.
    - Your entry must implement the IBattleshipOpponent interface.

    Scoring:
    - Best 51 games out of 101 games is the winner of a match.
    - All competitors will play matched against each other, round-robin style.
    - The best half of the competitors will then play a double-elimination tournament to determine the winner. (Smallest power of two that is greater than or equal to half, actually.)
    - I will be using the TournamentApi framework for the tournament.
    - The results will be posted here.
    - If you submit more than one entry, only your best-scoring entry is eligible for the double-elim.

    Good luck! Have fun!

    EDIT 1: Thanks to Freed, who has found an error in the Ship.IsValid function. It has been fixed. Please download the updated version of the framework.

    EDIT 2: Since there has been significant interest in persisting stats to disk and such, I have added a few non-timed set-up and tear-down events that should provide the required functionality. This is a semi-breaking change. That is to say: the interface has been modified to add functions, but no body is required for them. Please download the updated version of the framework.
    EDIT 3:
    - Bug Fix 1: GameWon and GameLost were only getting called in the case of a time out.
    - Bug Fix 2: If an engine was timing out every game, the competition would never end.
    Please download the updated version of the framework.

    EDIT 4: Results!
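    (For readers who want to see the board rules concretely, here is a minimal Python sketch of the set-up they describe – random, non-overlapping placement of the five ships on a 10x10 grid. It is an illustration only; actual entries are .NET code implementing the IBattleshipOpponent interface from the framework download.)

        # Illustration of the placement rules: 10x10 grid, ships of length 2, 3, 3, 4, 5,
        # no overlap (adjacency allowed). Not part of the competition framework.
        import random

        SIZE = 10
        SHIP_LENGTHS = [2, 3, 3, 4, 5]

        def place_ships(rng=random):
            grid = [[0] * SIZE for _ in range(SIZE)]      # 0 = empty, n = ship id
            for ship_id, length in enumerate(SHIP_LENGTHS, start=1):
                while True:
                    horizontal = rng.random() < 0.5
                    row = rng.randrange(SIZE if horizontal else SIZE - length + 1)
                    col = rng.randrange(SIZE - length + 1 if horizontal else SIZE)
                    cells = [(row, col + i) if horizontal else (row + i, col)
                             for i in range(length)]
                    if all(grid[r][c] == 0 for r, c in cells):
                        for r, c in cells:
                            grid[r][c] = ship_id
                        break
            return grid

        if __name__ == "__main__":
            for row in place_ships():
                print(" ".join(str(cell) for cell in row))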

    Read the article

  • ExtAsp or Coolite - ASP.NET wrappers around ExtJs

    - by Jon
    Hi, We are a small Microsoft shop looking into ExtJs, and we like the rapid building of complex and structured UIs that can be achieved with the toolkit. We have been experimenting with ExtAsp.NET (CodePlex), which is an open-source layer of ASP.NET code that wraps around the ExtJs framework. We have also noticed the Coolite framework, which looks good too and does the same thing. We have 2 options: either we purchase the ExtJs license, which will be required if we use ExtAsp, or we purchase the Coolite kit, which includes the ExtJs license. It looks like Coolite is actually a little cheaper than ExtJs for some reason?? However, is it a little more risky as regards the upgrade path if the Coolite framework becomes unsupported, whereas ExtAsp as an open source solution will have community backing? Just looking to make the right step.

    Read the article

  • Selenium and Headless Environment

    - by sdmythos_gr
    I recently installed Python 2.7, Robot Framework and the Selenium Library (I still don't know if I succeeded though...) on a Red Hat server to run some tests on a web application. So I tried a simple test case using the Robot Framework to see if the Selenium Library is functional, just to open a web page, nothing more... Selenium Server is up and running according to the result of ps, and the firefox binaries are in the PATH... Running the test case from the Robot Framework (with pybot testcasename.tsv) I get an exception:

        ERROR: Problem capturing a screenshot to string: java.awt.AWTException: headless environment

    So, what is the headless environment? Does anyone have an idea if there is something else that needs to be installed or configured as well?

    Read the article

  • Error: could not locate an NSManagedObjectModel for entity name 'TAB_RSS'

    - by Stephen
    Hello, Does anyone know what the following error means: +entityForName: could not locate an NSManagedObjectModel for entity name 'TAB_RSS' I noticed my frameworks were highlighted in red: UIKit.framework, Foundation.framework and CoreGraphics.framework. I've now added these but am getting a warning message stating "missing required architecture i386 in file". I think this may be related to the above error. I found another post that may help me http://stackoverflow.com/questions/1...e-i386-in-file but I can't find my project.pbxproj file. I think I need to edit this and remove references to the FRAMEWORK_SEARCH_PATHS. Stephen

    Read the article

  • Naming conventions and field naming question for CakePHP

    - by jphenow
    Okay so two questions very related: 1) Does following the naming convention for classes, controllers, database fields, etc. affect the framework's ability to work the way it was intended? (I'm a little new to working with a framework from the beginning of app development) 2) This question is more important if 1 is a yes. Say I have a table, A, that has 2 foreign keys pointing at the same table, B, but different entries (they're like edges of a graph that point at two vertices) how would I follow the naming convention of their database fields? All I can think to do is something like vertex_1_id and vertex_2_id but I don't know how the framework would handle that if the naming conventions are necessary for its functioning correctly.

    Read the article

  • How to get VS2010 Web.config Transformations working from NAnt?

    - by jmcd
    In my Nant file I've got (paths shortened): <echo message="#### TARGET - compile ####"/> <echo message=""/> <echo message="Build Directory is ${build.dir}" /> <exec program="${framework}\msbuild.exe" commandline="..\src\Solution.sln /m /t:Clean /p:Configuration=Release" /> <exec program="${framework}\msbuild.exe" commandline="..\src\Solution.sln /m /t:Rebuild /p:Configuration=Release" /> <exec program="${framework}\msbuild.exe" commandline="..\src\Solution.sln /m /t:TransformWebConfig /p:Configuration=Release" /> Which results in: Build FAILED. "C:\..\src\Solution.sln" (TransformWebConfig target) (1) -> C:\..\src\Solution.sln.metaproj : error MSB4057: The target "TransformWebConfig" does not exist in the project. [C:\..\src\Solution.sln] 0 Warning(s) 1 Error(s)Time Elapsed 00:00:00.05 The solution and associated projects are all VS2010 and the Web Application even has the correct reference in the .csproj: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> Shouldn't this just work?

    Read the article

  • Python urllib.urlopen IOError

    - by Michael
    So I have the following lines of code in a function:

        sock = urllib.urlopen(url)
        html = sock.read()
        sock.close()

    and they work fine when I call the function by hand. However, when I call the function in a loop (using the same urls as earlier) I get the following error:

        Traceback (most recent call last):
          File "./headlines.py", line 256, in <module>
            main(argv[1:])
          File "./headlines.py", line 37, in main
            write_articles(headline, output_folder + "articles_" + term +"/")
          File "./headlines.py", line 232, in write_articles
            print get_blogs(headline, 5)
          File "/Users/michaelnussbaum08/Documents/College/Sophmore_Year/Quarter_2/Innovation/Headlines/_code/get_content.py", line 41, in get_blogs
            sock = urllib.urlopen(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 87, in urlopen
            return opener.open(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 203, in open
            return getattr(self, name)(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 314, in open_http
            if not host: raise IOError, ('http error', 'no host given')
        IOError: [Errno http error] no host given

    Any ideas?

    Read the article

  • NUnit-console 2.5.4 not capable of running multiple assemblies?

    - by Per Salmi
    I am having problems running tests with the command line NUnit test runner. I am using version 2.5.4 with .NET 4 on an x64 machine. Using the following line results in a failure "Could not load file or assembly 'bar' or one of its dependencies. The system cannot find the file specified." nunit-console-x86 foo.dll bar.dll /framework=4.0.30319 If I reverse the dll file names it complains about not finding 'foo' instead... It works if I run them separately like: nunit-console-x86 foo.dll /framework=4.0.30319 Also the tests of the second file works if I run: nunit-console-x86 bar.dll /framework=4.0.30319 Before upgrading our projects to 4.0 we used NUnit 2.5.2 and the same command line tool options and at that point the runner worked well with multiple assemblies. It seems like the ability to run tests on multiple files at the same time is broken... Anyone that can see the same behavior or does it work indicating that my environment is somehow broken? /Per

    Read the article

  • How to learn high-level Java web development concepts

    - by titaniumdecoy
    I have some experience writing web applications in Java for class projects. My first project used Servlets and my second, the Stripes framework. However, I feel that I am missing the greater picture of Java web development. I don't really understand the web.xml and context.xml files. I'm not sure what constitutes a Java EE application as opposed to a generic Java web application. I can't figure out how a bean is different from an ordinary Java class (POJO?) and how that differs from an Enterprise Java Bean (EJB). These are just the first few questions I could think of, but there are many more. What is a good way to learn how Java web applications function from the top down rather than simply how to develop an application with a specific framework? (Is there a book for this sort of thing?) Ultimately, I would like to understand Java web applications well enough to write my own framework.

    Read the article

  • UIKit/UiKit.h missing, on a newer version

    - by letsee
    Dear Everyone, I have an application which was written for 2.1. Now I'm running that app on Xcode 3.2.5 and SDK 4.2. Here's the problem: when I try to Build and Run, I get the following error:

        UIKit.framework/UIKit.h: No such file or directory
        In file included from users/.../classes/Radio.m
        UIKit.framework/UIKit.h: no such file or directory in users/.../classes/Radio.h

    I don't know why I'm facing this error, because UIKit.framework is included in my project's "Frameworks" group. I've updated the OS target and other similar options, and the application runs cleanly without UIKit. I would appreciate it if anyone could help me through this. Regards,

    Read the article

  • Searching for a Kohana Beginner's Tutorial for PHP

    - by Andreas Grech
    I am going to try to build a PHP website using a framework for the first time, and after some research here and there, I've decided to try Kohana. I downloaded the source from their website and ran it on my web server, and was then greeted with a 'Welcome to Kohana!' page, and nothing more... I've tried to find some beginner tutorials on the web regarding this particular framework, but to my surprise came up with almost nothing (only this one, but it's not a great deal of help). I am not new to PHP and neither am I new to the MVC concept, but I am very new to PHP frameworks... so can anyone point me to a Kohana tutorial somewhere on the web that will help me get started in building my website using this framework, from scratch? P.S. As I said, I want a beginner's tutorial for this. [UPDATE] I am currently reading the Official Guide... we'll see how that goes.

    Read the article

  • Problem with Gallio and TeamCity and the new Visual Studio 2010 release

    - by Bernard Larouche
    I am running TeamCity on a virtual machine. I installed the new Visual Studio 2010 release yesterday and converted my VS 2008 projects. I also installed .NET Framework 4 on my virtual machine. Before yesterday all my projects were building successfully on the CI server, but since I installed VS 2010 I get the following error message:

        error MSB5014: File format version is not recognized. MSBuild can only read solution files between versions 7.0 and 9.0, inclusive.

    I did change my config on TeamCity to take into account the new .NET 4 framework:

        Build Runner: MSBuild
        Build File Path: CFT.msbuild
        MSBuild version: Microsoft.NET Framework 4.0
        MSBuild ToolsVersion: 4.0
        Run Platform: x86

    I think it has something to do with the fact that now MSBuild must refer to the .NET 4 framework, but it seems that it keeps referring to 2.0.

    Read the article

  • Wordpress form handling

    - by Ron
    I need to add a basic form page to a website that runs on the WordPress framework. I have the following raw materials ready:
    - Client side: HTML form layout, CSS and jQuery validation code.
    - Server side: a form handler PHP function that processes the $_POST[] data.
    My problem is integrating this code into the WordPress framework. I have looked at some plugins, but they do much more than I would like, and they also have their own validation, which is cumbersome to change. Could anyone suggest a good form plugin that provides just the framework hooks? Or would it be worthwhile to write the plugin myself? Thanks.

    Read the article

  • TextMate can't find my RSpec gem in opt (from macports)

    - by sbwoodside
    I know I've had this problem before so I'm really frustrated. I've got the Ruby RSpec bundle installed for TextMate, but when I Run Behaviour Description or Run Focused Example I get this wonderful error:

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:827: in `report_activate_error': Could not find RubyGem rspec (>= 1.1.0) (Gem::LoadError)
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:261: in `activate'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:68: in `gem'
            from /Users/simon/Library/Application Support/TextMate/Bundles/Ruby RSpec.tmbundle/Support/lib/spec_mate.rb:13
            from /tmp/temp_textmate.oWRPUR:3: in `require'
            from /tmp/temp_textmate.oWRPUR:3

    (I added linebreaks to make it readable.) I'm using MacPorts so my rspec gem is installed in /opt/local/lib/ruby/gems/1.8/gems/. Why isn't it finding it? In Preferences > Advanced > Shell Variables my TM_RUBY is set to /opt/local/bin/ruby. I also tried the trick here: http://dnite.org/2007/8/28/textmate-and-your-environment-variables/ ... which didn't do anything.

    Read the article
