Search Results

Search found 24117 results on 965 pages for 'write through'.


  • What are some Java patterns well-suited for fast, algorithmic coding?

    - by Casey Chu
    I'm in college, and I've recently started competing in programming competitions with my friends. These competitions involve solving algorithmic problems quickly. It's a lot of fun, but there's one problem: I'm forced to use Java (my teammates use Java). Background: I'm a self-taught JavaScript programmer, and it hurts to write Java code. I find it very verbose and inflexible, and I feel slowed down when I have to declare types and decide which of the eighty list data structures to use. I'm also frustrated by the lack of functional programming features and by how verbose regular expressions, arrays, and dictionaries are to use. As an example, consider the problem of finding the length of the longest run of consecutive identical characters in a given string. So the string XX22BBBBccXX222 would give 4, for the run of four Bs. In Java, I'd have to loop through and manually count characters and manually keep track of the maximum (at least, as far as I'm aware -- I'm not as familiar with Java as I am with JavaScript). In JavaScript, I'd find it like this:

        var max = Math.max.apply(Math, str.match(/(.)\1*/g).map(function (s) { return s.length; }));

    Much quicker and simpler, in my book. The question: what are some Java features, techniques, or patterns well-suited for fast, algorithmic coding?
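
    For the example above, a minimal Java sketch using java.util.regex suggests that the same regex-based approach carries over (the class name and hard-coded input are illustrative only):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LongestRun {
            public static void main(String[] args) {
                String str = "XX22BBBBccXX222";
                int max = 0;
                // (.)\1* matches a run of identical characters, mirroring the JavaScript version
                Matcher m = Pattern.compile("(.)\\1*").matcher(str);
                while (m.find()) {
                    max = Math.max(max, m.group().length());
                }
                System.out.println(max); // prints 4
            }
        }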

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test the code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I fairly often (about 1 commit in 100, I think, and with good reason) make non-compilable commits. It seems to me that a requirement of "always compilable/stable" commits conflicts with the idea of frequent commits: a programmer would rather make one commit a week than test the whole project's stability/compilability ten times a day. For compilable code only, I use tags and some selected branches (trunk etc.). I see these reasons to commit not fully working or not compilable code: If I am developing a new feature, it is hard to make it work while writing only a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing a function's prototype or interface, I may have to make hundreds of changes -- not mechanical changes, but intellectual ones -- and any one of them could call for its own commit (but if I want all commits to be stable, I should commit 1 time instead of 100). In all these cases, making only stable commits means making commits containing many, many changes, and it becomes very hard to find out "What happened in this commit?". Another aspect of this problem is that code that compiles gives no guarantee of working properly. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
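
    For reference, a commit hook of the kind those articles describe can be as small as the following sketch (the Maven command is only an assumption -- substitute whatever builds and tests your project):

        #!/bin/sh
        # .git/hooks/pre-commit -- reject the commit if the build or the tests fail
        if ! mvn -q clean test; then
            echo "Commit rejected: build or tests failed." >&2
            exit 1
        fi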

    Read the article

  • A network share folder is invisible to users

    - by Myrddin Emrys
    I have a network share folder on which I was recently cleaning up permissions. I removed the four individual names from the access permissions on the folder, added a new security group (Universal) with standard Read/Write permissions to that folder, and then added those 4 people to the group. However... now nobody can see the folder. The users can see the other 9 folders on that shared drive, but the 10th is missing. I cannot see any security permission on the parent folder or on the folder itself that would cause it to be invisible to anyone, regardless of whether they have permission to open it or edit files within it.

    Read the article

  • Recursive Batch File

    - by MCZ
    I have a file that looks like this:

        head1,head2,head3,head4,head5,head6
        a11,a12,keyA,a14,a15,a16
        a21,a22,keyB,a24,a25
        a31,a32,keyC,a34
        a41,a42,keyB,a44,a44
        a51,a52,keyA,a54,a55,a56
        a61,a62,keyA,a64,a65,a66
        a71,a72,keyC,a74
        some message

    Objective: write the list of unique keys to a text file. For example, the result for the file described above should be: keyA, keyB, keyC. Here's the pseudocode I would like to implement in the batch file recur.bat:

        Read second line of inputfile
        If no key exists on the second line, return; else continue
        Append keyX to list
        FINDSTR /v keyX inputfile
        Pipe results to recur.bat

    I don't know if this is the most efficient way to do this without using an actual programming language. Any suggestions for actual batch file code?
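
    A non-recursive sketch that collects the unique keys directly may be simpler; it assumes the input file is data.csv, the key is always the third comma-separated field, and the result goes to keys.txt:

        @echo off
        del keys.txt 2>nul
        rem the key is the third comma-separated field of each data row; skip the header
        for /f "skip=1 tokens=3 delims=," %%k in (data.csv) do if not "%%k"=="" (
            rem append the key only if it is not already in keys.txt
            findstr /x /c:"%%k" keys.txt >nul 2>nul || >>keys.txt echo %%k
        )
        type keys.txt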

    Read the article

  • Implementing a ILogger interface to log data

    - by Jon
    I have a need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface will be used for testing and also in other projects. This is my interface:

        // This could be used by filesystem, webservice
        public interface ILogger
        {
            List<string> PreviousLogRecords { get; set; }
            void Log(string Data);
        }

        public interface IFileLogger : ILogger
        {
            string FilePath;
            bool ValidFileName;
        }

        public class MyClassUnderTest
        {
            public MyClassUnderTest(IFileLogger logger) { .... }
        }

        [Test]
        public void TestLogger()
        {
            var mock = new Mock<IFileLogger>();
            mock.Setup(x => x.Log(Is.Any<string>).AddsDataToList()); // Is this possible??
            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
            Assert.AreEqual(3, mock.PreviousLogRecords.Count);
        }

    This won't work, I believe, because nothing is storing the items. So is this possible using Moq, and what do you think of the design of the interface?
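
    For what it's worth, a sketch of one way this is commonly done with Moq: capture the logged strings into a local list with Callback and assert on that, rather than expecting the mock itself to keep state (the Setup/Callback/Verify calls are standard Moq; the class and method names simply mirror the question):

        [Test]
        public void TestLogger()
        {
            var logged = new List<string>();
            var mock = new Mock<IFileLogger>();

            // record every string passed to Log() in a local list
            mock.Setup(x => x.Log(It.IsAny<string>()))
                .Callback<string>(s => logged.Add(s));

            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");

            Assert.AreEqual(3, logged.Count);
            // or verify the call count directly:
            mock.Verify(x => x.Log(It.IsAny<string>()), Times.Exactly(3));
        }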

    Read the article

  • FreeBSD Server .htaccess issues

    - by Will Ayers
    Server details: FreeBSD, PHP 4.3.11, Apache. Apache modules: mod_throttle, mod_php4, mod_speedycgi, mod_ssl, mod_setenvif, mod_so, mod_unique_id, mod_headers, mod_expires, mod_auth_db, mod_auth_anon, mod_auth, mod_access, mod_rewrite, mod_alias, mod_actions, mod_cgi, mod_dir, mod_autoindex, mod_include, mod_info, mod_status, mod_negotiation, mod_mime, mod_mime_magic, mod_log_config, mod_define, mod_env, mod_vhost_alias, mod_mmap_static, http_core.

    The issue I am having is that whenever I write any kind of code in the .htaccess file, it throws a 500 Internal Server Error. I am simply trying to rewrite URLs and am using the exact code that WordPress creates for me; I even tried custom code used before on previous servers and it still does not work. WordPress-created code:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /lobster-tail-blog/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /lobster-tail-blog/index.php [L]
        </IfModule>
        # END WordPress

    And even something as simple as this throws the error:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        </IfModule>

    Does anyone know of any fixes, or why this is causing the error? I have the mod_rewrite module loaded.
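
    One common cause of a 500 from an otherwise valid .htaccess is an AllowOverride setting that lets the file be read but does not permit mod_rewrite directives in it; a hedged sketch of the httpd.conf change (the directory path is only an example):

        # in httpd.conf, for the directory that holds the .htaccess file
        <Directory "/usr/local/www/data/lobster-tail-blog">
            # FileInfo (or All) must be allowed before mod_rewrite directives may
            # appear in .htaccess; otherwise Apache logs "RewriteEngine not allowed
            # here" and returns a 500 Internal Server Error
            AllowOverride FileInfo
        </Directory>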

    Read the article

  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted mostly of server-side developers with minimal expertise in JavaScript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago two high-level client-side developers joined our team. They can do in HTML/CSS/JavaScript pretty much anything that we could previously do with server-side code and server-side web controls: show/hide controls, do validation, control AJAX refreshing. So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kind of like the Amazon Fulfillment API (http://docs.amazonwebservices.com/fws/latest/APIReference/), so that client-side developers would fully take over the UI, while server-side developers would concentrate only on business logic. So for an ordering system you would have a high-level API like:

        OrderService.asmx
            CreateOrderResponse CreateOrder(CreateOrderRequest)
            AddOrderItem
            AddPayment
            SubmitPayment
            GetOrderByID
            FindOrdersByCriteria
            ...

    There would be JSON/REST access to the API, so it would be easy to consume from the client-side UI. We could use this API both for internal UI development and for third parties to create their own applications. With the advances in JavaScript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (a la Amazon) that client-side developers can consume?

    Read the article

  • Server 2012 Storage Pools, Raid Controller... can the Storage Pool deal with it?

    - by TomTom
    Before trying it out -- I can't find any documentation. Given that Storage Pools have serious performance problems with parity, and do not rebalance data when you add discs, my preferred way to use them would be as thin-provisioned space for iSCSI targets, with every "pool" running against one RAID volume that comes from a RAID controller (which also provides SSD read and write caching -- another thing missing from Storage Pools). The main question is: how does a Storage Pool handle a change in the underlying disc? I am mostly talking about OCE (Online Capacity Expansion), where a disc suddenly reports a larger size after an expansion. Standard Windows allows you to use this additional space (and expand the partitions). How does a Storage Pool handle it?

    Read the article

  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and see the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and with the stupid definitions of Google's dictionary, so I decided to scratch my own itch. I randomly stumbled on a site called Wordnik; it provides examples plus definitions plus lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

        <?xml version="1.0" encoding="utf-8"?>
        <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
          <os:display>
            <os:name>View on Wordnik</os:name>
            <os:description>Looking up words on an awesome word site called Wordnik</os:description>
            <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Define">
            <os:activityAction context="selection">
              <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    That's it. To get it, go here. Enjoy!

    Read the article

  • Multi Gateway and Backup Routing on a cisco router

    - by user64880
    Hi all, I have a Cisco 2611 router with only one FastEthernet port. I now have two internet gateways, and I want to configure the router so that when the primary route fails, the second route automatically takes over and routes all my packets. When I set two ip route commands on the router, it works well -- but when the peer IP on the primary route goes down, it does not switch to the second route until I remove the first route command. My current configuration is below. How can I set this up?

        interface FastEthernet0/0
         ip address 81.12.21.100 255.255.255.248 secondary
         ip address 62.220.97.14 255.255.255.252

        ip route 0.0.0.0 0.0.0.0 62.220.97.13
        ip route 0.0.0.0 0.0.0.0 81.12.21.97 100

    Cheers, Kamal
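
    The usual way to make a static route withdraw when its next hop stops answering is to tie it to an IP SLA probe; a hedged sketch follows (it assumes an IOS release with IP SLA and object tracking -- older 2600 images may use the ip sla monitor / rtr syntax instead):

        ip sla 1
         icmp-echo 62.220.97.13
         frequency 10
        ip sla schedule 1 life forever start-time now

        track 1 ip sla 1 reachability

        ! the primary route is withdrawn when the probe to 62.220.97.13 fails,
        ! so the floating static route with AD 100 takes over automatically
        ip route 0.0.0.0 0.0.0.0 62.220.97.13 track 1
        ip route 0.0.0.0 0.0.0.0 81.12.21.97 100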

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server, and I added the necessary configuration to vsftpd.conf so that local users can use clients like FileZilla to manage their home directories on the server. I found out that only users in the sudoers list can access without a problem -- though even they can't download files -- while users that are not sudoers cannot access their homes from a client at all. They can, however, access via a web browser using the FTP protocol, and then they can only reach their home directories (as intended). I'm running Fedora 14 on my server and my vsftpd.conf looks like this:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Welcome to GAMBITA FTP service
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        use_localtime=YES

    Does anyone have an idea of what might be happening? Nothing concerning vsftpd is written in any log.
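
    A hedged aside: nothing in the config itself obviously singles out non-sudoers, and on Fedora a frequent culprit for "local user cannot reach the home directory over FTP" is the SELinux boolean for FTP home access. It can be checked and enabled like this (this is an assumption worth verifying against /var/log/audit/audit.log):

        # check whether vsftpd is allowed to read/write user home directories
        getsebool ftp_home_dir

        # enable it persistently, then restart vsftpd
        setsebool -P ftp_home_dir on
        service vsftpd restart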

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:

    Separate editions for each library version. I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

    One supported version, but others tested. I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try and make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.
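
    For the second option, one way to run the same test suite against several company software versions is to drive the dependency version from a property and switch it with profiles; a hedged pom.xml fragment (the artifact coordinates and version numbers are made up for illustration):

        <properties>
            <!-- default: the oldest version we officially support -->
            <company.software.version>3.1.0</company.software.version>
        </properties>

        <dependencies>
            <dependency>
                <groupId>com.mycompany</groupId>
                <artifactId>company-software</artifactId>
                <version>${company.software.version}</version>
            </dependency>
        </dependencies>

        <profiles>
            <!-- mvn test -Pv32 or -Pv33 re-runs the suite against newer versions -->
            <profile>
                <id>v32</id>
                <properties>
                    <company.software.version>3.2.0</company.software.version>
                </properties>
            </profile>
            <profile>
                <id>v33</id>
                <properties>
                    <company.software.version>3.3.0</company.software.version>
                </properties>
            </profile>
        </profiles>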

    Read the article

  • In MVC, should the DAO be called from the Controller or the Model?

    - by tito
    I have seen various arguments against the DAO being called directly from the Controller class, and also against the DAO being called from the Model class. In fact, I personally feel that if we are following the MVC pattern, the controller should not be coupled to the DAO; the Model class should invoke the DAO from within, and the controller should invoke the model class. That way we can decouple the model class from the web application and expose its functionality in other ways, for example to a REST service. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse that functionality, right? I have summarized both approaches below.

    Approach #1

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx","23",1);
                new CustomerDAO().save(customer);
            }
        }

    Approach #2

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx","23",1);
                customer.save(customer);
            }
        }

        public class Customer {
            ...........
            private void save(Customer customer){
                new CustomerDAO().save(customer);
            }
        }

    Note -- here is a definition of Model: the model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react. I would need an expert opinion on this, because I see many people using #1 and many using #2. So which one is it?

    Read the article

  • Retrieving a specific value from "df -h" using shell

    - by Diego Dias
    When I use df -h, I get the following output:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                                59G  2.2G   54G   4% /
        /dev/sda1              122M   38M   78M  33% /boot
        tmpfs                  1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath   11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath   15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha      5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb      5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc      5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column on a specific storage. I used to use:

        df -k /storage_name | tail -1 | awk '{print $3}'

    But the Filesystem column can have a value or not, which would change the variable of my script from $3 to $4. How can I get the Avail value with a single command line even if there are no values in the previous columns?
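
    A hedged sketch of two common ways around this (the mount point is just an example): count the fields from the end of the line, or use df -P, which forces POSIX single-line output per filesystem so the columns never wrap:

        # count from the end: Avail is always the third field from the right
        df -k /storage4 | tail -1 | awk '{print $(NF-2)}'

        # or force one line per filesystem so Avail is always $4
        df -Pk /storage4 | tail -1 | awk '{print $4}'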

    Read the article

  • Where to Start?

    - by freemann098
    My name is Chase. I've been programming for over 3 years now and I've made very little progress towards game development, and I blame myself for it. I have experience in many languages such as C++, C#, and Java, and a little bit of knowledge of JavaScript/HTML and Python. My question is where to start in actually understanding game development. Whenever I watch game development tutorials, they mostly make sense until they reach things like OpenGL or other advanced topics that make no sense at all -- an example is something like the glOrtho matrix. Videos either don't explain things like this or don't explain them very well. Do I not know enough basics? I find myself always copying code from a video but understanding very little of it. It's like I'm memorizing things I don't understand, which makes it hard to program at all. If I wanted to get to the point where I could write my own game engine, or just a game by myself in general, in C++ using at most the documentation, how would I work up to that level? Should I learn C first, or get really good at the basics with C++? I know there is a similar question on this site, but it's not the same, because the person asking it already has a solid level of programming knowledge. I'm stuck in a loop of learning the same things, but if I go further I don't understand. I'm stuck in the same spot and need to make progress.

    Read the article

  • Customers Deploying Sun Oracle Database Machine

    - by kimberly.billings
    Philippine Savings Bank (PS Bank) recently deployed the Sun Oracle Database Machine to underpin its enterprise-wide analytics platform. Now, queries and requests that used to take from three hours to several days complete in less than one minute, with near real-time updates. Read the press release. EFU General Insurance also announced this week that they have deployed the Sun Oracle Database Machine. With Oracle, EFU will be able to open more sales channels via the Web and facilitate integration with other companies. As a result, more quality services can be offered to its customers via the Web because of the more agile and reliable IT infrastructure. In addition, a centralized IT environment will offer the EFU management a real-time view of key information, enabling EFU to analyze business trends and make timely decisions. Read the press release. Let us know about your Sun Oracle Database deployment!

    Read the article

  • Is this an acceptable approach to undo/redo in Python?

    - by Codemonkey
    I'm making an application (wxPython) to process some data from Excel documents. I want the user to be able to undo and redo actions, even gigantic actions like processing the contents of 10,000 cells simultaneously. I Googled the topic, and all the solutions I could find involve a lot of black magic or are overly complicated. Here is how I imagine my simple undo/redo scheme. I write two classes -- one called ActionStack and an abstract one called Action. Every "undoable" operation must be a subclass of Action and define the methods do and undo. The Action subclass is passed the instance of the "document", or data model, and is responsible for committing the operation and remembering how to undo the change. Now, every document is associated with an instance of the ActionStack. The ActionStack maintains a stack of actions (surprise!). Every time actions are undone and new actions are performed, all undone actions are removed forever. The ActionStack will also automatically remove the oldest Action when the stack reaches the configurable maximum size. I imagine the workflow would produce code looking something like this:

        class TableDocument(object):
            def __init__(self, table):
                self.table = table
                self.action_stack = ActionStack(history_limit=50)

            # ...

            def delete_cells(self, cells):
                self.action_stack.push( DeleteAction(self, cells) )

            def add_column(self, index, name=''):
                self.action_stack.push( AddColumnAction(self, index, name) )

            # ...

            def undo(self, count=1):
                self.action_stack.undo(count)

            def redo(self, count=1):
                self.action_stack.redo(count)

    Given that none of the methods I've found are this simple, I thought I'd get the experts' opinion before I go ahead with this plan. More specifically, what I'm wondering about is -- are there any glaring holes in this plan that I'm not seeing?
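
    To make the scheme concrete, here is a minimal sketch of the ActionStack half of the design (the method names follow the question; the deque-based history limit is only one way to do it):

        from collections import deque

        class ActionStack(object):
            def __init__(self, history_limit=50):
                # oldest actions fall off the left end automatically
                self.undo_stack = deque(maxlen=history_limit)
                self.redo_stack = []

            def push(self, action):
                action.do()
                self.undo_stack.append(action)
                # performing a new action invalidates anything that was undone
                del self.redo_stack[:]

            def undo(self, count=1):
                for _ in range(count):
                    if not self.undo_stack:
                        break
                    action = self.undo_stack.pop()
                    action.undo()
                    self.redo_stack.append(action)

            def redo(self, count=1):
                for _ in range(count):
                    if not self.redo_stack:
                        break
                    action = self.redo_stack.pop()
                    action.do()
                    self.undo_stack.append(action)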

    Read the article

  • How can I use vim as a word processor with soft word wrap at 40 columns?

    - by Angela
    I have been trying to use vim as a word processor to do most of my initial drafting, and then open the file in AbiWord or OpenOffice to do my final printing. The problem has been that, even with :set linebreak and :set wrap, when I open the file there are a ton of hard line breaks, and I have to go and manually remove them. What I want is to write at 40 columns with soft wrap-around, so that when I open the file in OpenOffice it is restored to whatever that column width is, preserving things like carriage returns but without all those line breaks.
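
    A hedged sketch of the relevant settings -- the key point is that soft wrap means leaving textwidth at 0 and stopping vim from inserting hard breaks, rather than baking a 40-column width into the file itself:

        " soft-wrap settings for prose; no hard line breaks are written to the file
        set wrap
        set linebreak
        set textwidth=0
        set wrapmargin=0
        set formatoptions-=t   " don't auto-insert newlines while typing
        " optional: make the window itself 40 columns wide to simulate a narrow page
        " :vertical resize 40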

    Read the article

  • Background process text appears in terminal vim

    - by Jezen Thomas
    First time poster, long time lurker, searched, couldn’t find etc, etc. I’m running vim in tmux, in iTerm2. I’m running a server with Grunt.js, which I have running in the background, out of my way. I start my grunt server in the background like this: grunt server & Grunt also watches a bunch of files, and runs some tasks when any of the watched files have been written to. The problem is, when I am in vim and I write a file, the output from grunt starts rendering in vim! Here are some screenshots to illustrate the problem: Before writing the file: And after writing the file: What have I tried? I’ve tried running a ‘stock’ vim by starting with this: vim -u NONE …But the problem remains. This suggests to me that the problem is not with my .vimrc. Perhaps it’s an issue with iTerm2, I don’t know. Help.
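
    One hedged workaround, assuming the goal is simply to keep grunt's watch output from being drawn over the vim screen: detach its output from the shared terminal, or give it its own tmux pane:

        # send grunt's output to a log file instead of the shared terminal
        grunt server > grunt.log 2>&1 &

        # or run it in its own tmux pane so it never draws over vim
        tmux split-window -d 'grunt server'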

    Read the article

  • How do I enable deflate on Apache2? (Debian)

    - by acidzombie24
    I looked at the other answers and those solutions did not help. I am completely confused about how to do this. I know almost nothing about Apache or how to use it, and nearly every article says "write this" but doesn't tell me where. So far I have done this:

        # a2enmod deflate
        Module deflate already enabled

    Then I tried writing this in httpd.conf:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css

    Then I wrote a longer version which involved deflate.c. After those didn't work I deleted them all and wrote the AddOutputFilterByType line in apache2/sites-enabled/000-default inside of. I used this to test: http://www.whatsmyip.org/http_compression/ -- every time it says no compression. I used another site, but I could never get compression. Also, I restart the server every time I save a file, so what could be the problem and how do I enable this properly?
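
    For reference, a minimal sketch of the usual Debian arrangement (the config path is the Debian default; the restart command may be apache2ctl or service apache2 restart depending on the system):

        # enable the module (creates the symlinks under mods-enabled/)
        a2enmod deflate

        # put the filter in a file Apache actually reads, e.g.
        # /etc/apache2/mods-available/deflate.conf:
        #   <IfModule mod_deflate.c>
        #       AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
        #   </IfModule>

        # then restart Apache so the change takes effect
        /etc/init.d/apache2 restart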

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following
        <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>.
        Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML" -- for a flat file instance this should be set to "Native".

    Read the article

  • Best filesystem for an external drive? ExFAT?

    - by GiH
    What is the best filesystem for use with multiple OSes? I saw this question but it's old and doesn't take ExFAT into account. Here is what I know from my findings:

        NTFS  - can't write from Mac
        FAT32 - doesn't support files larger than 4GB
        HFS   - only Mac
        ExFAT - ??? I don't know much about this, but it seems like it got rid of the 4GB limit of FAT32 and can be read on Windows, Mac, and Linux.

    Is ExFAT the best bet? I tried formatting the drive in ExFAT just now, but on Windows XP SP3 it was showing up as not formatted... It seems to me like FAT32 is still the best, but I wanted to see what other people had to say.

    Read the article

  • How can I pass an external instance to the constructor of an object that's being created using the default XNA XML content loader?

    - by Michael
    I'm trying to understand how to use the XNA XML content importer to instantiate non-trivial objects that are more than a collection of basic properties (e.g., a class that inherits from DrawableGameComponent or GameComponent and requires other things to be passed into its constructor). Is it possible to pass existing external instances (e.g., an instance of the current Game) to the constructor of an object that's being created using the default XNA XML content loader? For example, imagine that I have the following class, inheriting from DrawableGameComponent:

        public class Character : DrawableGameComponent
        {
            public string Name { get; set; }

            public Character(Game game) : base(game) { }

            public override void Update(GameTime gameTime) { }
            public override void Draw(GameTime gameTime) { }
        }

    If I had a simple class that did not need other parameters in its constructor (i.e., the Game instance), then I could simply use this XML:

        <XnaContent>
          <Asset Type="MyNamespace.Character">
            <Name>John Doe</Name>
          </Asset>
        </XnaContent>

    ...and then create an instance of Character using this code:

        var character = Content.Load<Character>("MyXmlAssetName");

    But that won't work, because I need to pass the Game into the constructor. What's the best way to handle this situation? Is there a way to pass in things like the current Game using the default XNA XML content loader? Do I need to write my own XML loader? (If so, how?) Is there a better object-oriented design that I should be using for my classes? Note: although I used Game in this example, I'm really just asking how to pass any type of existing instance to my constructors. (For example, I'm using the Farseer Physics Engine, and some of my classes also need a reference to the Farseer World object.) Thanks in advance.

    Read the article

  • TextMate - completion using an external file or file contained in project?

    - by Neil Baldwin
    Does anyone know how to get TextMate to search an external file (or even the files contained in a TextMate "project") to perform word completion? I'm writing code for the C64 (using TextMate to write it) and I have an external file containing labels for all of the hardware registers/KERNAL routines, e.g.

        VIC2InteruptStatus = $D019

    It would be really handy to be able to type, say, 'VIC2I', then press the key for word completion and have TextMate find matches in the external library file, rather than how I'm having to do it at the moment: opening the library file and copy-pasting the register names into my code.

    Read the article

  • Is this method of writing Unit Tests correct?

    - by aspdotnetuser
    I have created a small C# project to help me learn how to write good unit tests. I know that one important rule of unit testing is to test the smallest 'unit' of code possible, so that if it fails you know exactly what part of the code needs to be fixed. I need help with the following before I continue to implement more unit tests for the project. If I have a Car class, for example, that creates a new Car object which has various attributes that are calculated when its constructor method is called, would the two following tests be considered as overkill? Should there be one test that tests all calculated attributes of the Car object instead?

        [Test]
        public void CarEngineCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.GreaterOrEqual(car.Engine, 1);
        }

        [Test]
        public void CarNameCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.IsNotNull(car.Name);
        }

    Should I have the above two test methods to test these things, or should I have one test method that asserts the Car object has first been created and then tests these things in the same test method?

    Read the article
