Search Results

Search found 11674 results on 467 pages for 'adding'.

  • NHibernate mapping for the aspnet_Users table

    - by Michael D. Kirkpatrick
    My User table, which I want to map to aspnet_Users: <class name="User" table="`User`"> <id name="ID" column="ID" type="Int32" unsaved-value="0"> <generator class="native" /> </id> <property name="UserId" column="UserId" type="Guid" not-null="true" /> <property name="FullName" column="FullName" type="String" not-null="true" /> <property name="PhoneNumber" column="PhoneNumber" type="String" not-null="false" /> </class> My aspnet_Users table: <class name="aspnet_Users" table="aspnet_Users"> <id name="ID" column="UserId" type="Guid" /> <property name="UserName" column="UserName" type="string" not-null="false" /> </class> I tried adding one-to-one, one-to-many and many-to-one mappings. The closest I can get is with this error: Object of type 'System.Guid' cannot be converted to type 'System.Int32'. How do I create a one-way mapping from User to aspnet_Users via the UserId column in User? I only want to create a reference so I can extract read-only information, affect sorts, etc. I still need to leave the UserId column in User set up as it is now. Maybe a virtual reference keying off of UserId? Is this even possible with NHibernate? Unfortunately it acts like it only wants to use ID from User to map to aspnet_Users. Changing the User table so that its primary key is UserId instead of ID is not an option at this point. Any help would be greatly appreciated. Thanks in advance.
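
    One mapping shape that fits this "reference only" requirement is a many-to-one from User to aspnet_Users keyed on the existing UserId column; since UserId is already the primary key of aspnet_Users, nothing else has to change. The sketch below is a hedged suggestion only, and the AspNetUser property name is an assumption rather than something from the original mapping:

        <class name="User" table="`User`">
          <id name="ID" column="ID" type="Int32" unsaved-value="0">
            <generator class="native" />
          </id>
          <!-- existing UserId, FullName and PhoneNumber properties stay exactly as they are -->
          <!-- read-only link to aspnet_Users; insert/update are disabled so the plain
               UserId property keeps ownership of the column value -->
          <many-to-one name="AspNetUser" class="aspnet_Users" column="UserId"
                       insert="false" update="false" />
        </class>

    The User entity would then expose a matching AspNetUser property, and read-only data such as aspnet_Users.UserName (or sorts based on it) can be reached through that reference without touching the existing ID primary key.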

  • Why doesn't the F# Set implement ISet<T>?

    - by Sean Devlin
    The .NET Framework is adding an ISet<T> interface with the 4.0 release. In the same release, F# is being added as a first-class language. F# provides an immutable Set<'T> class. It would seem logical to me that the immutable set provided would implement the ISet<T> interface, but it doesn't. Does anyone know why? My guess is that they didn't want to implement an interface intended to be mutable, but I don't think this explanation holds up. After all, their Map<'Key, 'Value> class implements IDictionary, which is mutable. And there are examples elsewhere in the framework of classes implementing interfaces that are only partially appropriate. My other thought is that ISet<T> is new, so maybe they didn't get around to it. But that seems kind of thin. Does the fact that ISet<T> is generic (v. IDictionary, which is not) have anything to do with it? Any thoughts on the matter would be appreciated.

  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined, which were to create a LINQ-to-SQL dbml model, add in the database tables through the Server Explorer, and finally to create a DataContext class. That last part is the part I'm stuck on. In this class, I'm trying to create methods that work around the exposed data. Following the example in the tutorial, I created this: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MySite.Models { public partial class MyDataContext { public List<Post> GetPosts() { return Posts.ToList(); } public Post GetPostById(int id) { return Posts.Single(p => p.ID == id); } } } As you can see, I'm trying to use my Post data table. However, it doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to my not adding the data tables correctly, but I'm not sure. Thanks in advance.
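
    When the compiler can't see Posts, the usual culprit is that the hand-written partial class isn't merging with the designer-generated one: both halves must have exactly the same class name and namespace, and the Posts property only exists if the Post table was actually dragged onto the .dbml design surface. A hedged sketch of the hand-written half, assuming the designer really generated a MyDataContext class in the MySite.Models namespace (the generated *.designer.cs file shows both):

        using System.Collections.Generic;
        using System.Linq;

        namespace MySite.Models                 // must match the designer file exactly
        {
            public partial class MyDataContext  // same name as chosen in the .dbml designer
            {
                public List<Post> GetPosts()
                {
                    // "this." makes it explicit that Posts comes from the generated half
                    return this.Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return this.Posts.Single(p => p.ID == id);
                }
            }
        }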

  • PHP Switch and Login

    - by Steve Rivera
    I'm fairly new to PHP and I am messing around with a login/registration system. I set up my sample website using a PHP-SWITCH script I found a while back: <?php switch($_GET['id']) { default: include('home.php'); /* LOGIN PAGES */ break; case "register_form": include ('includes/user_system/register_form.php'); } ?> On the registration page the form links to my "register.php", which checks the validity of the form, checks for any blank fields and so on. "register.php" is supposed to refresh the page and add a reason for what the user did wrong when submitting the form. On my "register_form.php" page, which holds the actual form, there is a field that is hidden until the user makes a mistake: <?php if (isset($reg_error)) { ?> , please try again. My "register.php" checks the form for all the errors. Here's the bit of code that will refresh the page with the reason for the error: // Check if any of the fields are missing if (empty($_POST['username']) || empty($_POST['password']) || empty($_POST['confirmpass'])) { // Reshow the form with an error $reg_error = 'One or more fields missing'; include 'register_form.php'; Now after I submit the form without any fields filled out I get the error, but it refreshes to the actual "register_form.php". The problem with this is that because of my PHP-SWITCH script (it helps me manage the site a lot more easily) I don't have any formatting on that page. The actual URL to my "register_form.php" would be "index.php?id=register_form.php". Now I have tried several different things, such as changing it to: include 'index.php?id=register_form.php' and also changing it to: header('location: index.php?id=register_form.php') Unfortunately all this does is refresh the page without the reason for the error. I know this could easily be solved by just adding a JavaScript validator, but I'd like to know if it is possible to refresh the page with the error using either "include" or "header()" while having a PHP-SWITCH script on the website.

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application that uses a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle); we support these 3 kinds of databases. We want to give the user the possibility to do what I think we can call versioning. Let me explain: we have a database 1. This is the master. We want to be able to create a database 2 that will be the same thing as database 1, but which we can give to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? It's because sometimes we need to give a copy of the database to another company on another site and they can't connect to our server. They work on their side and then we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work, massive modification to the database or to the existing queries. This is a 2-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path following writable= in gitosis.conf I managed to insert a few line breaks. I saved, committed and pushed to the server, whereupon I received the following parsing error: Traceback (most recent call last): File "/usr/bin/gitosis-run-hook", line 8, in load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')() File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run return app.main() File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main self.handle_args(parser, cfg, options, args) File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args post_update(cfg, git_dir) File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update cfg.read(os.path.join(export, '..', 'gitosis.conf')) File "/usr/lib/python2.5/ConfigParser.py", line 267, in read self._read(fp, filename) File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read raise e ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf (...) I have removed the line break and amended the commit with git commit -m "fix linebreak" --amend. However, git push still yields the exact same error, which leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. A Windows 7 developer machine with VS 2008 Pro SP1 and the VS2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me - "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL. The issue is around trying to add the TFS server into the Team Explorer window in Visual Studio 2008. Here's a screenshot showing the error: unable to connect to this Team Foundation Server. Possible reasons for failure include: the Team Foundation Server name, port number or protocol is incorrect; the Team Foundation Server is offline; the password is expired or incorrect. The TFS server is up and running properly. Firewall ports are open, and the server is accessible via the browser on the dev machine! Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server? Resolution: Here's how I solved this problem: install VS 2008 Team Explorer as above; re-install VS 2008 Service Pack 1; and when adding a TFS server to Team Explorer, you MUST specify the URL as such: http://[tfsserver]:[port]/[vdir]/[projectCollection]. In my case above, it was http://serverX:8080/tfs/AppDev-TestProject. You cannot simply add the TFS server name and have VS look for all Project Collections on the server. TFS 2010 has a new URL format (by default) and VS 2008 doesn't know how to gather that list of collections.

  • Teach Markup or use a WYSIWYG editor?

    - by Atomiton
    When it comes to WYSIWYG editors, WYSI rarely WYG. The problem I always have is when people paste in formatted text from Word. Ideally, what I'm looking for is a way for people to input text into the document while at the same time teaching them structure... I just don't know if that's a realistic goal (compared to cut n' paste). I'm curious whether people have found that using something other than WYSIWYG editors (take SO, for example) has worked for REAL WORLD USERS. I'm not talking about programmers, developers and experienced internet users... I'm talking about your average user. I'd be interested in best practices when it comes to getting users to enter content... and I'd love it if someone could point me to some good editors/examples. There are lots of choices when it comes to WYSIWYG (CKEditor, FreeTextBox, TinyMCE), but I don't hear a lot about SO-like techniques. Does adding that small barrier scare users away? Is it too difficult to teach people to mark up their text? Is it easier to teach them HTML? Is a BBCode implementation a good idea? What are some pros/cons of WYSIWYG vs. markup? What approaches have others used?

  • How to create a function using jQuery live? [Solved]

    - by Mahmoud
    Hey all, I am trying to create a function that will keep the user in the lightbox images while he adds to the cart. For a demo you can visit secure.sabayafrah.com, username: mahmud, password: mahmud. When you click on any image it will enlarge using Lightbox v2, but when the user clicks the add image, it refreshes the page. When I asked about it on the jCart support forum they told me to use jQuery live, but I don't know how to do it. So far I have tried this code, but nothing is happening: jQuery(function($) { $('#button') .livequery(eventType, function(event) { alert('clicked'); // to check if it works or not return false; }); }); I also used jQuery(function($) { $('input=[name=addto') .livequery(eventType, function(event) { alert('clicked'); // to check if it works or not return false; }); }); yet nothing worked. For the code that creates those images see http://pasite.org/code/572 Update 1: I have done this: function adding(form){ $( "form.jcart" ).livequery('submit', function() {var b=$(this).find('input[name=<?php echo $jcart['item_id']?>]').val();var c=$(this).find('input[name=<?php echo $jcart['item_price']?>]').val();var d=$(this).find('input[name=<?php echo $jcart['item_name']?>]').val();var e=$(this).find('input[name=<?php echo $jcart['item_qty']?>]').val();var f=$(this).find('input[name=<?php echo $jcart['item_add']?>]').val();$.post('<?php echo $jcart['path'];?>jcart-relay.php',{"<?php echo $jcart['item_id']?>":b,"<?php echo $jcart['item_price']?>":c,"<?php echo $jcart['item_name']?>":d,"<?php echo $jcart['item_qty']?>":e,"<?php echo $jcart['item_add']?>":f} }); return false; } and it seems to add to jCart, but it still refreshes the page.

  • Java: Match tokens between two strings and return the number of matched tokens

    - by Cryssie
    I need some help to find the number of matched tokens between two strings. I have a list of strings stored in an ArrayList (example given below): Line 0 : WRB VBD NN VB IN CC RB VBP NNP Line 1 : WDT NNS VBD DT NN NNP NNP Line 2 : WRB MD PRP VB DT NN IN NNS POS JJ NNS Line 3 : WDT NN VBZ DT NN IN DT JJ NN IN DT NNP Line 4 : WP VBZ DT JJ NN IN NN Here you can see that each string consists of a bunch of tokens separated by spaces. So, there are three things I need to do: Compare the first token (WRB) in Line 0 to the tokens in Line 1 to see if they match, moving on to the next tokens in Line 0 until a match is found; if there's a match, mark the matched token in Line 1 so that it will not be matched again. Return the number of matched tokens between Line 0 and Line 1. Return the distance of the matched tokens. Example: token NN is found at position 3 on Line 0 and position 5 on Line 1, so Distance = |3-5| = 2. I've tried splitting the string and storing it in a String[], but a String[] is fixed-size and doesn't allow shrinking or adding of new elements. I tried Pattern and Matcher but with disastrous results. I tried a few other methods but there are some problems with my nested for loops (I will post part of my code if it helps). Any advice or pointers on how to solve this problem would be very much appreciated. Thank you very much.

  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test and debug with ease the bundles I'll be coding. Besides that, I have some "cultural" requirements. I want to be able to use Java continuous integration servers (typically, Hudson). As a consequence of that first requirement, I want to have a repeatable, one-click build process; my typical tool for that is Maven. And finally, being a long-term Eclipse user, and having m2eclipse at hand to merge my Eclipse environment with my Maven one, I obviously want to be able to test and debug with that IDE. So far, here is what I know I can use (and have already tested): maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities. I have tested Maven Pax (and Eclipse Pax) and am not really satisfied with either: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is hell in itself). I have taken a look at Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse, besides using the traditional JPDA bridge. It also seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that). Have you got any ideas? Some Maven/Eclipse plugins?

  • Can't override "From" address in MailMessage class using .config login credentials

    - by Jeff
    I'm updating some existing code that sends a simple email using .NET's SMTP classes. Sample code is below. The SMTP host is Google and the login info is contained in the App.config as shown below (obviously not real login info :)). The problem I'm having, and I haven't been able to find any answers by Googling, is that in the delivered email I can NOT override the display of the "from" email address that comes from the "userName" attribute of the network element in the config. In the line below that explicitly sets the From property of the myMailMessage object, that value, "[email protected]", does NOT display when the email is received. It still shows as "[email protected]" from the network tag. However, the From name "Sparky" does appear in the email. I've tried adding a custom "From" header to the Headers property of myMailMessage but that didn't work either. Is there any way to log in to the SMTP server, as shown below using the network tag credentials, but in the actual email received override the From email address that's displayed? Sample code: MailMessage myMailMessage = new MailMessage(); myMailMessage.Subject = "My New Mail"; myMailMessage.Body = "This is my test mail to check"; myMailMessage.From = new MailAddress("[email protected]", "Sparky"); myMailMessage.To.Add(new MailAddress("[email protected]", "receiver name")); SmtpClient mySmtpClient = new SmtpClient(); mySmtpClient.Send(myMailMessage); In App.config: <system.net> <mailSettings> <smtp deliveryMethod="Network" from="[email protected]"> <network host="smtp.gmail.com" port="587" userName="[email protected]" password="mypassword" defaultCredentials="false"/> </smtp> </mailSettings> </system.net>
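
    For what it's worth, a hedged sketch of the usual pattern is below; note that Gmail's SMTP servers typically rewrite the visible From address back to the authenticated account unless the other address is registered as a verified "Send mail as" alias on that account, so there may be no purely client-side fix:

        // Sketch only -- same structure as the sample above, with ReplyTo added.
        // Gmail may still rewrite From unless the alias is configured on the account.
        MailMessage myMailMessage = new MailMessage();
        myMailMessage.Subject = "My New Mail";
        myMailMessage.Body = "This is my test mail to check";
        myMailMessage.From = new MailAddress("[email protected]", "Sparky");
        myMailMessage.ReplyTo = new MailAddress("[email protected]", "Sparky"); // ReplyToList in .NET 4
        myMailMessage.To.Add(new MailAddress("[email protected]", "receiver name"));

        SmtpClient mySmtpClient = new SmtpClient();   // credentials still come from App.config
        mySmtpClient.Send(myMailMessage);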

  • How to call Twitter's Streaming/Filter Feed with urllib2/httplib?

    - by Simon
    Update: I switched this back from answered, as I tried the solution proposed in Nick's cogent answer and switched to Google's urlfetch: logging.debug("starting urlfetch for http://%s%s" % (self.host, self.url)) result = urlfetch.fetch("http://%s%s" % (self.host, self.url), payload=self.body, method="POST", headers=self.headers, allow_truncated=True, deadline=5) logging.debug("finished urlfetch") but unfortunately "finished urlfetch" is never printed - I see the timeout happen in the logs (it returns 200 after 5 seconds), but execution doesn't seem to return. Hi all - I'm attempting to play around with Twitter's Streaming (aka firehose) API with Google App Engine (I'm aware this probably isn't a great long-term play as you can't keep the connection perpetually open with GAE), but so far I haven't had any luck getting my program to actually parse the results returned by Twitter. Some code: logging.debug("firing up urllib2") req = urllib2.Request(url="http://%s%s" % (self.host, self.url), data=self.body, headers=self.headers) logging.debug("called urlopen for %s %s, about to call urlopen" % (self.host, self.url)) fobj = urllib2.urlopen(req) logging.debug("called urlopen") When this executes, unfortunately, my debug output never shows the "called urlopen" line printed. I suspect what's happening is that Twitter keeps the connection open and urllib2 doesn't return because the server doesn't terminate the connection. Wireshark shows the request being sent properly and a response returned with results. I tried adding Connection: close to my request header, but that didn't yield a successful result. Any ideas on how to get this to work? Thanks -Simon

  • How do I sort an internationalized i18n table with symfony and doctrine?

    - by Maurizio
    I would like to display a list of records from an internationalized table using sfDoctrinePager. Not all the records have been translated into all the languages supported by the application, so I had to implement a fallback mechanism for some fields (by overriding the getFoo() function in Bar.class.php, as explained in another post here). I have a different fallback list for each culture. Everything works fine until it comes to sorting the records in alphabetical order. I'm sorting the records at the SQL (DQL) level, by adding an ->orderBy('t.name') to the query: $q = Doctrine::getTable('Foo') ->createQuery('f') ->leftJoin('f.Translation t') ->orderBy('t.name') But here comes the trouble: the list is not sorted correctly, regardless of the active culture. I get rather better results when I limit the translations to the active culture, like this: ->leftJoin('f.Translation t WITH lang = ?', $request->getParameter('sf_culture')); Then the sorting is correct, as long as all the translations exist for the active culture. If a translation does not exist and I have to take the name from the fallback language, the record will be displayed at the very beginning of the list (I understand this happens because the value for the current culture is null). My question is: is there a best practice for getting internationalized fields (needing fallbacks) sorted correctly with Doctrine and sfDoctrinePager? Thank you in advance.

  • Triangle numbers problem: show the result within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers: 1: 1 3: 1,3 6: 1,2,3,6 10: 1,2,5,10 15: 1,3,5,15 21: 1,3,7,21 28: 1,2,4,7,14,28 We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors. Sample Input: 5 Output 28 Input Constraints: 1<=n<=320 I was obviously able to do this question, but I used a naive algorithm: Get n. Find triangle numbers and check their number of factors using the mod operator. But the challenge was to show the output within 4 seconds of input. On high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2d array first and then get the input from the user and search the array. But somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code. Or if there are any better ways, please tell me.
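
    One approach that comfortably fits a 4-second budget without any precomputed table: don't factor the triangle number itself. Since T(n) = n(n+1)/2 and n and n+1 share no common factor, the divisor count of T(n) is the product of the divisor counts of the two halves (after dividing the even one by 2). A rough C# sketch of that idea (the method names and the choice of long are my own assumptions, not part of the original problem statement):

        // Counts divisors of n via its prime factorisation by trial division.
        static int CountDivisors(int n)
        {
            int count = 1;
            for (int p = 2; p * p <= n; p++)
            {
                if (n % p != 0) continue;
                int exp = 0;
                while (n % p == 0) { n /= p; exp++; }
                count *= exp + 1;
            }
            if (n > 1) count *= 2;          // one remaining prime factor
            return count;
        }

        // First triangle number with at least 'target' divisors.
        static long FirstTriangleWith(int target)
        {
            for (int n = 1; ; n++)
            {
                // T(n) = n*(n+1)/2; the two factors are coprime, so their
                // divisor counts multiply once the factor of 2 is removed.
                int a = (n % 2 == 0) ? n / 2 : n;
                int b = (n % 2 == 0) ? n + 1 : (n + 1) / 2;
                if (CountDivisors(a) * CountDivisors(b) >= target)
                    return (long)n * (n + 1) / 2;
            }
        }

    For target = 5 this returns 28, matching the sample, and even target = 320 only ever factors numbers far smaller than the triangle number itself.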

  • Solr associations

    - by Tom
    Hi all, For the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are available out of the box or can be easily configured. There is however one feature that we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses: <document> <name>Apache</name> <cat>1</cat> ... </document> <document> <name>McDonalds</name> <cat>2</cat> ... </document> In addition we have another XML file with all the categories and synonyms: <cat id="1"> <name>software</name> <synonym>IT</synonym> </cat> <cat id="2"> <name>fast food</name> <synonym>restaurant</synonym> </cat> We want to associate the businesses and categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we want to be able to update the categories (adding/removing synonyms...) without re-indexing all the businesses. Is there anything in Solr that does this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below: The following query executes quickly (1 second): SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID but if we add a filter to the query, then it takes approximately 2 minutes to return: SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID WHERE SA.CHG_DATE'19 Feb 2010' Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) For the FulltextMatch table valued function where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join) and 2) For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows) hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates being used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on a HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used for a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong.. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The question: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive "K:", when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes "re-syncd" so that failover can be accomplished? I don't know that there wasn't a "K:" drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of resource move.

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of ANKH. My web project is a whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb files... etc). Checking in even super small changes (literally adding the letter "x" to a few files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The ANKH client is measuring in BYTES PER SECOND ... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true setting so that I can turn that off or something? Oh, also, changing one "big" file of a super 10k gets the speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual SourceSafe 2005 (I know, ouch) uploads files at about 200-500kbps from this very same computer/internet connection.

  • Combo box in a scrollable panel causing problems

    - by Dennis
    I have a panel with AutoScroll set to true. In it, I am programmatically adding ComboBox controls. If I add enough controls to exceed the viewable size of the panel a scroll bar appears (so far so good). However, if I open one of the combo boxes near the bottom of the viewable area the combo list isn't properly displayed and the scrollable area seems to be expanded. This results in all of the controls being "pulled" to the new bottom of the panel with some new blank space at the top. If I continue to tap on the drop down at the bottom of the panel the scrollable area will continue to expand indefinitely. I'm anchoring the controls to the left, right and top so I don't think anchoring is involved. Is there something obvious that could be causing this? Update: It looks like the problem lies with anchoring the controls to the right. If I don't anchor to the right then I don't get the strange behavior. However, without right anchoring the control gets cut off by the scroll bar. Here's a simplified test case I built that shows the issue: public Form1() { InitializeComponent(); Panel panel = new Panel(); panel.Size = new Size(80, 200); panel.AutoScroll = true; for (int i = 0; i < 10; ++i) { ComboBox cb = new ComboBox(); cb.Anchor = AnchorStyles.Left | AnchorStyles.Right | AnchorStyles.Top; cb.Items.Add("Option 1"); cb.Items.Add("Option 2"); cb.Items.Add("Option 3"); cb.Items.Add("Option 4"); cb.Location = new Point(0, i * 24); panel.Controls.Add(cb); } Controls.Add(panel); } If you scroll the bottom of the panel and tap on the combo boxes near the bottom you'll notice the strange behavior.
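
    For what it's worth, one workaround that may be worth trying (a sketch only, not a verified fix): drop the right anchor, which is what seems to trigger the runaway scroll area, and emulate it by resizing the combo boxes by hand whenever the panel resizes:

        // Variation of the test case above: no right anchor; widths are kept in
        // sync with the panel's client area manually instead.
        public Form1()
        {
            InitializeComponent();
            Panel panel = new Panel();
            panel.Size = new Size(80, 200);
            panel.AutoScroll = true;
            for (int i = 0; i < 10; ++i)
            {
                ComboBox cb = new ComboBox();
                cb.Anchor = AnchorStyles.Left | AnchorStyles.Top;   // note: no AnchorStyles.Right
                cb.Width = panel.ClientSize.Width;                  // fill the panel manually
                cb.Items.Add("Option 1");
                cb.Items.Add("Option 2");
                cb.Items.Add("Option 3");
                cb.Items.Add("Option 4");
                cb.Location = new Point(0, i * 24);
                panel.Controls.Add(cb);
            }
            // ClientSize shrinks when the scroll bar appears, so the combos are not cut off.
            panel.Resize += (s, e) =>
            {
                foreach (Control child in panel.Controls)
                    child.Width = panel.ClientSize.Width;
            };
            Controls.Add(panel);
        }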

  • NSManagedObject from existing custom class

    - by A.S.
    Hello there. I have an existing class that has methods to deserialize from XML in my code. Now I need to create a correct Core Data model from that class. Its objects will be created not only from Core Data storage but also by deserializing XML (something like instance->title = [[NSString stringWithUTF8String: (const char *)subNode->children->content] retain]; ) without saving to Core Data, and sometimes I need to save them. What are the correct steps to modify the existing class to do that, apart from adding the Core Data framework and making my class an NSManagedObject instead of an NSObject? Class sample: @interface TSTSong : NSManagedObject<NTSerializableObject> { NSString *identifier; NSString *title; float length; NSURL *previewURL; NSString *author; NSURL *coverURL; NSString *appStoreId; BOOL isPurchased; NSURL *bannerURL; NSDecimalNumber *priceValue; NSLocale *priceLocale; } P.S. I'm a noob, so if I'm doing something wrong please let me know. Sorry for my English.

  • Publishing artifacts with sources on archiva

    - by Palimondo
    At work I'm dipping my toes into managing project dependencies with Maven. We use Apache Archiva (1.2.1) as a local repository and proxy. I'm adding an artifact for an open source project that is not published on any public repository. I've learned that to publish the sources I should use the Classifier field on the Upload Artifact page. The sources are then listed alongside the jar and pom when I browse the repository. But when I update my Maven dependencies I only get the jar and pom from the repository. I noticed that sources are also missing when Archiva proxies downloads from other public repositories for me. I didn't find any configuration options in Archiva's admin pages to serve the sources... What am I missing? Update: I was missing the fact that artifact sources have to be downloaded manually, i.e. the Maven client has to request them, which is controlled by the command-line option -DdownloadSources=true. Maven Integration for Eclipse has a preference setting to always download them, as described in Resolving artifact sources. Archiva then serves the sources for local artifacts or proxies the request to remote repositories and caches the sources for future requests.

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the service contract and host (svc) are in separate assemblies). I fired up a Web Site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my Web Site project I added the reference, and I am able to call the services normally. Now I need to call some of the services client side. I'm not sure whether I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone provide any thoughts or experience with configuring this? Some of the changes I made were adding the following to the operation contract: [OperationContract] [WebInvoke(Method = "POST", UriTemplate = "SetFoo")] void SetFoo(string Id); Then this above the implementation of the interface: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] Then in the service web.config I have this: <serviceHostingEnvironment aspNetCompatibilityEnabled="true"> <baseAddressPrefixFilters> <add prefix="http://services:1000/wcfservices.svc/"/> </baseAddressPrefixFilters> </serviceHostingEnvironment> <serviceHostingEnvironment multipleSiteBindingsEnabled="false" /> Then on the client side I attempted this: <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server"> <CompositeScript> <Scripts> <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" /> </Scripts> </CompositeScript> </asp:ScriptManagerProxy> I am attempting to call the service like this in JavaScript: wcfservices.SetFoo(string Id); Nothing is working. If there is a better idea or solution (JSON-enabled services, jQuery, etc.) I am willing to make any changes. Thanks for any suggestions/tips provided....
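
    One thing that stands out: the snippets above mix WCF's two web-programming models. UriTemplate belongs to the plain webHttp (REST/jQuery) route, while the ScriptManager route expects an enableWebScript endpoint behavior plus an asp:ServiceReference inside the ScriptManager's Services collection (not a ScriptReference), and the generated JavaScript proxy generally requires the .svc to live in the same web application as the page. A hedged sketch of a service shape for the ScriptManager route (class and method names are illustrative, not a verified fix):

        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class FooAjaxService
        {
            [OperationContract]
            [WebInvoke(Method = "POST", ResponseFormat = WebMessageFormat.Json)]   // no UriTemplate here
            public void SetFoo(string id)
            {
                // existing server-side logic would go here
            }
        }

    The page would then reference the service through the ScriptManager's Services collection (an asp:ServiceReference pointing at the local .svc) and call it from JavaScript through the generated proxy, e.g. FooAjaxService.SetFoo(id, onSuccess). Again, this is a sketch of the pattern rather than a drop-in fix for the cross-site setup described above.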

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
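
    As the update says, a binary heap fits this access pattern exactly: the minimum is always at the root, and both delete-min and insert are O(log n) with no rebalancing or rotations. A rough array-backed sketch (written in C# here; the element type just needs to be comparable, so a big-rational type works as long as it implements IComparable):

        using System;
        using System.Collections.Generic;

        // Minimal binary min-heap: Push and PopMin are both O(log n).
        class MinHeap<T> where T : IComparable<T>
        {
            private readonly List<T> items = new List<T>();

            public int Count { get { return items.Count; } }

            public void Push(T item)
            {
                items.Add(item);
                int i = items.Count - 1;
                while (i > 0)                                   // sift up
                {
                    int parent = (i - 1) / 2;
                    if (items[parent].CompareTo(items[i]) <= 0) break;
                    T tmp = items[parent]; items[parent] = items[i]; items[i] = tmp;
                    i = parent;
                }
            }

            public T PopMin()                                   // assumes Count > 0
            {
                T min = items[0];
                int last = items.Count - 1;
                items[0] = items[last];
                items.RemoveAt(last);
                int i = 0;
                while (true)                                    // sift down
                {
                    int left = 2 * i + 1, right = left + 1, smallest = i;
                    if (left < items.Count && items[left].CompareTo(items[smallest]) < 0) smallest = left;
                    if (right < items.Count && items[right].CompareTo(items[smallest]) < 0) smallest = right;
                    if (smallest == i) return min;
                    T tmp = items[smallest]; items[smallest] = items[i]; items[i] = tmp;
                    i = smallest;
                }
            }
        }

    The work loop then becomes: push the initial item, and while Count > 0, PopMin, do the work, and Push the 0-2 new (larger) items.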
