Search Results

Search found 14008 results on 561 pages for 'easy marks'.


  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use Github in a basic way (e.g. pushing my local file changes to Github - so I've not yet pulled content down from Github and merged it into my localhost version or anything like that). I had a look over at this existing question - Git: localhost remote development remote production - but I think it may have been a bit advanced for me at this stage, as I didn't quite understand the terminology that most of the people were using. What I would like to achieve is a local server that my team of developers can all push to and pull from, and then have that local server upload any updated files automatically to our web server so we can see the updates live in the browser. I'm happy to set up a server in the office running Mac OSX Server, install Git on it, and have the devs write a shell script to push to the remote web server, but only if it is fairly easy for the devs' local Git to push to this new local server. I'm not a network engineer, so I don't know what would need to be set up for that to work. I know we could obviously make the server accessible via a local IP address like 192.168.0.xxx, but I'm not sure how that works with pushing to a Git repository on that server. Would that literally be something like doing this on my local machine: git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git ? Any ideas or advice you can give to a total Git newbie trying to help his team get a better workflow would be appreciated. Kind regards, Mark
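
    One possible setup, sketched below with placeholder paths, user name and hook details (only the IP scheme comes from the question): create a bare repository on the office server, add it as a remote on each developer's machine, and let a post-receive hook publish the pushed files towards the web server.

        # On the office server (reachable at 192.168.0.xxx), create a bare repo to push to.
        ssh dev@192.168.0.xxx 'git init --bare /srv/git/project.git'

        # On each developer's machine, register the server as a remote and push over SSH.
        git remote add office dev@192.168.0.xxx:/srv/git/project.git
        git push office master

        # On the server, hooks/post-receive inside the bare repo (made executable) can
        # check the latest files out into a folder that is then synced to the web server:
        #   GIT_WORK_TREE=/srv/www/project git checkout -f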

    Read the article

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind is a bit frazzled and could do with some help. I have a C++ class which I am currently using to read and write data to file. I was initially using binary data, but have decided to store the data as CSV so that programs written in other languages can load it. The C++ class looks a bit like this: class BinaryData { public: BinaryData(); void serialize(std::ostream& output) const; void deserialize(std::istream& input); private: Header m_hdr; std::vector<Row> m_rows; }; I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though - any suggestions on the most sensible way to do this?
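
    One workable convention, sketched below: write a named header section, then a data section, with a marker line starting each section so deserialize() can tell them apart. The Header/Row field names here are made up for illustration; only the overall layout is the suggestion.

        // Sketch only - the field names (version, count, a, b) are hypothetical.
        void BinaryData::serialize(std::ostream& output) const
        {
            output << "[header]\n";
            output << m_hdr.version << "," << m_hdr.count << "\n";
            output << "[data]\n";
            for (std::vector<Row>::const_iterator it = m_rows.begin(); it != m_rows.end(); ++it)
                output << it->a << "," << it->b << "\n";
        }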

    Read the article

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is the synchronization with the database on the server. The specific problem is the primary keys, or more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones. But then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB? Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based keys don't seem right because of the collisions that could happen. A GUID comes to mind - is that an option? It looks like generating a GUID in javascript is not that easy. I can do something with natural keys or composite keys. As I'm thinking about it, that looks like the best solution. Can I expect any problems with that?
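
    On the GUID point: a random (version 4) UUID is easy enough to generate client-side; a commonly used sketch is below. Note that Math.random is not cryptographically strong, so for keys that must be unguessable (or to reduce collision risk further) having the server hand out id blocks is the safer option.

        // Version-4 style UUID built from Math.random (fine for sync keys, not for security).
        function uuidv4() {
            return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
                var r = Math.random() * 16 | 0;
                var v = (c === 'x') ? r : (r & 0x3 | 0x8);
                return v.toString(16);
            });
        }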

    Read the article

  • Managing inverse relationships without CoreData

    - by Nathaniel Martin
    This is a question for Objective-J/Cappuccino, but I added the cocoa tag since the frameworks are so similar. One of the downsides of Cappuccino is that CoreData hasn't been ported over yet, so you have to make all your model objects manually. In CoreData, your inverse relationships get managed automatically for you... if you add an object to a to-many relationship in another object, you can traverse the graph in both directions. Without CoreData, is there any clean way to set up those inverse relationships automatically? For a more concrete example, let's take the typical Department and Employees example. To use Rails terminology, a Department object has-many Employees, and an Employee belongs-to a Department. So our Department model has an NSMutableSet (or CPMutableSet) "employees" that contains a set of Employees, and our Employee model has a variable "department" that points back to the Department model that owns it. Is there an easy way to make it so that, when I add a new Employee model into the set, the inverse relationship (employee.department) automatically gets set? Or the reverse: if I set the department model of an employee, then it automatically gets added to that department's employee set? Right now I'm making an object, "ValidatedModel", that all my models subclass, which adds a few methods that set up the inverse relationships using KVO. But I'm afraid that I'm doing a lot of pointless work, and that there's already an easier way to do this. Can someone put my concerns to rest?

    Read the article

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000 word) dictionary available. I need: reasonably fast lookups (constant time preferable) - maybe 200 lookups a second on occasion to solve a word puzzle, and maybe 20 lookups within 0.2 seconds more often to check words the user just spelled. EDIT: Lookups are typically asking "Is this word in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all possible letters the wildcards could have been and checking the generated words (i.e. 26 * 26 lookups for a word with two wildcards). As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is the top priority. My first naive attempts used Java's HashMap class, which caused an out of memory exception. I've looked into using the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?
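
    A minimal baseline that fits these numbers, sketched below: ship the word list as one sorted, lowercase text asset, load it into a single String[] once, and answer membership queries with a binary search (well under a millisecond per lookup). If a plain sorted array is still too heavy, a trie or DAWG packs the same data tighter, but that is a bigger job. The class and method names here are made up.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        // Sketch: load a newline-separated, lowercase word list once and binary-search it.
        public final class WordList {
            private final String[] words;

            public WordList(InputStream dictionary) throws IOException {
                List<String> loaded = new ArrayList<String>();
                BufferedReader reader = new BufferedReader(new InputStreamReader(dictionary, "UTF-8"));
                for (String line = reader.readLine(); line != null; line = reader.readLine()) {
                    loaded.add(line.trim());
                }
                reader.close();
                words = loaded.toArray(new String[loaded.size()]);
                Arrays.sort(words); // binarySearch requires sorted input
            }

            public boolean contains(String word) {
                return Arrays.binarySearch(words, word.toLowerCase()) >= 0;
            }
        }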

    Read the article

  • Modifying C# dictionary value

    - by minjang
    I'm a C++ expert, but not at all for C#. I created a Dictionary<string, STATS>, where STATS is a simple struct. Once I built the dictionary with initial string and STATS pairs, I want to modify the dictionary's STATS value. In C++, it's very clear: Dictionary<string, STATS*> benchmarks; Initialize it... STATS* stats = benchmarks[item.Key]; // Touch stats directly However, I tried like this in C#: Dictionary<string, STATS> benchmarks = new Dictionary<string, STATS>(); // Initialize benchmarks with a bunch of STATS foreach (var item in _data) benchmarks.Add(item.app_name, item); foreach (KeyValuePair<string, STATS> item in benchmarks) { // I want to modify STATS value inside of benchmarks dictionary. STATS stat_item = benchmarks[item.Key]; ParseOutputFile("foo", ref stat_item); // But, not modified in benchmarks... stat_item is just a copy. } This is a really novice problem, but wasn't easy to find an answer. EDIT: I also tried like the following: STATS stat_item = benchmarks[item.Key]; ParseOutputFile(file_name, ref stat_item); benchmarks[item.Key] = stat_item; However, I got the exception since such action invalidates Dictionary: Unhandled Exception: System.InvalidOperationException: Collection was modified; enumeration operation may not execute. at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) at System.Collections.Generic.Dictionary`2.Enumerator.MoveNext() at helper.Program.Main(String[] args) in D:\dev\\helper\Program.cs:line 75
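
    The exception in the edit comes from assigning into the dictionary while enumerating it. A minimal sketch of one common workaround: iterate over a snapshot of the keys instead of the dictionary itself, then write the modified struct back through the indexer (ParseOutputFile and STATS are taken from the question; the "foo" argument is just the question's placeholder).

        // Requires: using System.Linq;  (for ToList)
        // Keys.ToList() copies the keys, so the indexer assignments below no longer
        // invalidate an enumerator over the dictionary itself.
        foreach (string key in benchmarks.Keys.ToList())
        {
            STATS statItem = benchmarks[key];     // copy out (STATS is a value type)
            ParseOutputFile("foo", ref statItem);
            benchmarks[key] = statItem;           // write the modified copy back
        }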

    Read the article

  • convert old repository to mercurial

    - by nedlud
    I've been playing around with different versioning systems to find one I'm comfortable with. I started with SVN (let's call this version of the project "f1"), then changed over to GIT. But I didn't know how to convert the old SVN repo to GIT, so I just copied the folder, deleted the .svn stuff, and turned it into a GIT repo (let's call this copied version "f2"). Now I'm playing around with Mercurial and was very pleased to find that it has a Tortoise client for Windows. I was also pleased to find how easy it was to convert the GIT repo into Mercurial, so I preserved the history (I still cloned it first, just in case, so I'm calling this hg version "f3"). But now what I'm wondering is: what do I do with the old SVN repo that still holds my history from before I played with GIT? I guess I can convert the old SVN repo to Mercurial, but can I then merge those two histories into the one repository so I have a complete set of histories in one place? In other words, can I prepend f1 to f3?
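
    One way this usually goes, sketched below with placeholder paths: convert the old SVN repo with Mercurial's bundled convert extension (enable it in .hgrc), then force-pull the resulting, unrelated history into f3. Both histories then live in one repository as two roots; they stay side by side unless you merge them, so a literal "prepend" of f1 before f3 is not possible without rewriting f3's history.

        # .hgrc / Mercurial.ini needs:   [extensions]  convert =
        hg convert /path/to/old-svn-repo f1-hg   # SVN history -> a new Mercurial repo

        cd f3
        hg pull -f ../f1-hg     # -f allows pulling from an unrelated repository
        hg heads                # two roots now: the old SVN history and the current work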

    Read the article

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or from being used under any open source licence... #include<stdio.h> int main (void) { int version=2; printf("\r\n.Hello world, ver:(%d).", version); return 0; } It's a little obvious, or just a language definition example.. When does a source stop being "trivial, banal, commonplace, obvious", and start to be something for which you may claim "rights"? Perhaps it depends on who reads it: something that could be pure genius to someone who has never programmed could be just obvious to an expert. It's easy when, comparing two sources, there are 10000 identical lines of code - that's theft.. but it's not always so obvious. How do you measure the amount of "own-ness" - is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner - just to punish not caring about storage space and world resources =P)

    Read the article

  • Set up Qt and PyQt on Mac OS X so my app is also deployable on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to work on building a GUI for data visualization purposes. I work on Mac Snow Leopard (Intel), Python 3.1, using gcc 4.2.1 (from Xcode 3.1). I wanted to first install Qt and then PyQt. My goals are to be able to: quickly prototype the GUI and the accompanying logic that drives it using PyQt and Python; if I decide I need the speed, or if it's fairly easy to translate my GUI into C++ using the Qt tools, have the option to translate my app into C++; and be able to deploy my application onto Windows (both the Python and C++ versions of my app). Given the goals above, what are the correct steps I should take and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my readings so far, here's what I have: download the Qt source for Mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building as a framework causes some trouble in deployment and/or debugging, but can't find the article anymore); download the latest SIP source and build; download the latest PyQt and build from source (any special options I should pay attention to?). For deployment, I've read that I would need to use py2exe/cx_freeze for Windows and py2app for Mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars but it seems like what the article describes is deploying an app you build on Windows on the Windows platform and vice versa. How do you deploy to Windows (is it even possible?) if you are writing your Qt app on a Mac? Really appreciate the help.
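
    The build order itself is the usual Qt, then SIP, then PyQt sequence; a rough sketch follows (version numbers and directory names are placeholders, and each package's README is the authority on configure options). On deployment: the app has to be frozen on each target platform, so py2app runs on the Mac and py2exe/cx_freeze run on a Windows machine or VM against the same source tree; there is no supported way to produce the Windows executable from the Mac.

        # Rough sketch; version numbers are placeholders.
        cd qt-everywhere-opensource-src-4.x
        ./configure -platform macx-g++42 -arch x86_64 -no-framework
        make && sudo make install

        cd ../sip-4.x
        python configure.py
        make && sudo make install

        cd ../PyQt-mac-gpl-4.x
        python configure.py    # run with the same python, and with the new qmake first on PATH
        make && sudo make install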

    Read the article

  • What is the benefit of using ONLY OpenID authentication on a site?

    - by Peter
    From my experience with OpenID, I see a number of significant downsides: Adds a single point of failure to the site: it is not a failure that can be fixed by the site even if detected. If the OpenID provider is down for three days, what recourse does the site have to allow its users to log in and access the information they own? Takes a user to another site's content every time they log on to your site: even if the OpenID provider does not have an error, the user is redirected to their site to log in. The login page has content and links, so there is a chance a user will actually be drawn away from the site to go down the Internet rabbit hole. Why would I want to send my users to another company's website? [Note: my provider no longer does this and seems to have fixed this problem (for now).] Adds a non-trivial amount of time to the signup: to sign up with the site, a new user is forced to read a new standard, choose a provider, and sign up. Standards are something that the technical people should agree to in order to make the user experience frictionless; they are not something that should be thrust on the users. It is a phisher's dream: OpenID is incredibly insecure, and stealing a person's ID as they log in is trivially easy. [Taken from David Arno's answer below.] For all of the downsides, the one upside is to allow users to have fewer logins on the Internet. If a site has opt-in OpenID then users who want that feature can use it. What I would like to understand is: what benefit does a site get for making OpenID mandatory?

    Read the article

  • Unit testing an MVC action method with a Cache dependency?

    - by Steve
    I’m relatively new to testing and MVC and came across a sticking point today. I’m attempting to test an action method that has a dependency on HttpContext.Current.Cache and wanted to know the best practice for achieving the “low coupling” to allow for easy testing. Here's what I've got so far... public class CacheHandler : ICacheHandler { public IList<Section3ListItem> StateList { get { return (List<Section3ListItem>)HttpContext.Current.Cache["StateList"]; } set { HttpContext.Current.Cache["StateList"] = value; } } ... I then access it like such... I'm using Castle for my IoC. public class ProfileController : ControllerBase { private readonly ISection3Repository _repository; private readonly ICacheHandler _cache; public ProfileController(ISection3Repository repository, ICacheHandler cacheHandler) { _repository = repository; _cache = cacheHandler; } [UserIdFilter] public ActionResult PersonalInfo(Guid userId) { if (_cache.StateList == null) _cache.StateList = _repository.GetLookupValues((int)ELookupKey.States).ToList(); ... Then in my unit tests I am able to mock up ICacheHandler. Would this be considered a 'best practice' and does anyone have any suggestions for other approaches? Thanks in advance. Cheers
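
    That shape tests cleanly. A minimal sketch of such a test is below, assuming Moq and NUnit (any mocking/test framework works the same way) and assuming GetLookupValues returns an IEnumerable of Section3ListItem; ELookupKey, Section3ListItem and the action come from the question, everything else is made up for illustration.

        [Test]
        public void PersonalInfo_populates_state_list_cache_when_empty()
        {
            var states = new List<Section3ListItem> { new Section3ListItem() };

            var cache = new Mock<ICacheHandler>();
            cache.SetupProperty(c => c.StateList);   // property now tracks values; starts as null

            var repository = new Mock<ISection3Repository>();
            repository.Setup(r => r.GetLookupValues((int)ELookupKey.States)).Returns(states);

            var controller = new ProfileController(repository.Object, cache.Object);
            controller.PersonalInfo(Guid.NewGuid());

            // The controller should have filled the cache from the repository.
            Assert.That(cache.Object.StateList, Is.EqualTo(states));
        }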

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.
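
    One low-friction approach, sketched below with hypothetical names: rather than serializing the IQueryable results (which would go stale anyway), serialize the search criteria object the form binds to, store it under a GUID key via LINQ to SQL, and re-run the query whenever that key comes back for page 2 or for the "send this to a friend" link. SearchCriteria and the SavedSearch table are assumptions, not existing types.

        // Sketch: SavedSearch is an assumed table (Id uniqueidentifier, Criteria nvarchar/xml).
        public Guid SaveSearch(SearchCriteria criteria, DataContext db)
        {
            var key = Guid.NewGuid();
            var serializer = new XmlSerializer(typeof(SearchCriteria));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, criteria);
                db.GetTable<SavedSearch>().InsertOnSubmit(
                    new SavedSearch { Id = key, Criteria = writer.ToString() });
                db.SubmitChanges();
            }
            return key;
        }

        public SearchCriteria LoadSearch(Guid key, DataContext db)
        {
            var saved = db.GetTable<SavedSearch>().Single(s => s.Id == key);
            var serializer = new XmlSerializer(typeof(SearchCriteria));
            return (SearchCriteria)serializer.Deserialize(new StringReader(saved.Criteria));
        }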

    Read the article

  • In SQL can I return a table with a varying number of columns

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large SPROC whose result is a set of characteristics for a set of persons. So the table would look something like this:

    Property | Client1  Client2  Client3
    ------------------------------------
    Sex      | M        F        M
    Age      | 67       56       67
    Income   | Low      Mid      Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

    Property | Client1  Client2
    ---------------------------
    Sex      | M        F
    Age      | 67       56
    Weight   | 122      122

    The different number of properties is easy - those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns. There could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
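
    Since the column list is not known until runtime, a fixed table variable cannot describe it; the usual answer is to build the result as a long, narrow temp table (Property, ClientName, Value) and pivot it with dynamic SQL at the end. A rough T-SQL sketch, with the temp table name and columns assumed:

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

        -- Build the column list from whatever clients actually appear in the data.
        SELECT @cols = COALESCE(@cols + ', ', '') + QUOTENAME(ClientName)
        FROM (SELECT DISTINCT ClientName FROM #Results) AS c;

        SET @sql = N'SELECT Property, ' + @cols + N'
                     FROM #Results
                     PIVOT (MAX(Value) FOR ClientName IN (' + @cols + N')) AS p;';

        EXEC sp_executesql @sql;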

    Read the article

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question not relating to any language. I'm a bit torn between going for minimum code or optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say Tab 1 reads in a file with a specific layout, Tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1 I may need to use Layout A and perform some extra concatenation; if it contains a 2 I may need to use Layout B and do no concatenation but add two integer fields, etc. There could be 10+ codes that I will be looking at. Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required? Creating an individual path for each code would make my code extremely easy to follow at a glance, which in turn will help me out later on down the road when debugging or making changes. The downside is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 for every single code may be exactly the same). Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would create less code by placing conditionals only at steps that are unique. I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?

    Read the article

  • How to use MySQL geospatial extensions with spherical geometries

    - by Joshua
    Hi Everyone, I would like to store thousands of latitude/longitude points in a MySQL db. I was successful at setting up the tables and adding the data using the geospatial extensions where the column 'coord' is a Point(lat, lng). Problem: I want to quickly find the 'N' closest entries to latitude 'X' degrees and longitude 'Y' degrees. Since the Distance() function has not yet been implemented, I used GLength() function to calculate the distance between (X,Y) and each of the entries, sorting by ascending distance, and limiting to 'N' results. The problem is that this is not calculating shortest distance with spherical geometry. Which means if Y = 179.9 degrees, the list of closest entries will only include longitudes of starting at 179.9 and decreasing even though closer entries exist with longitudes increasing from -179.9. How does one typically handle the discontinuity in longitude when working with spherical geometries in databases? There has to be an easy solution to this, but I must just be searching for the wrong thing because I have not found anything helpful. Should I just forget the GLength() function and create my own function for calculating angular separation? If I do this, will it still be fast and take advantage of the geospatial extensions? Thanks! josh UPDATE: This is exactly what I am describing above. However, it is only for SQL Server. Apparently SQL Server has a Geometry and Geography datatypes. The geography does exactly what I need. Is there something similar in MySQL?
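
    At that MySQL vintage the geometry functions are planar, so the usual workaround is to compute the great-circle distance yourself in the ORDER BY (spherical law of cosines or haversine). It cannot use the spatial index, but for thousands of rows a scan is fine, and a bounding-box predicate on the POINT can pre-filter if it ever gets slow. A sketch, assuming a table named points and coord stored as POINT(lat, lng):

        -- @lat/@lng are the query position in degrees; 6371 km is an Earth radius.
        SET @lat = 48.20, @lng = 179.9;

        SELECT id,
               6371 * ACOS(
                   COS(RADIANS(@lat)) * COS(RADIANS(X(coord))) *
                   COS(RADIANS(Y(coord)) - RADIANS(@lng)) +
                   SIN(RADIANS(@lat)) * SIN(RADIANS(X(coord)))
               ) AS distance_km
        FROM points
        ORDER BY distance_km
        LIMIT 10;    -- the N closest entries, wrapping correctly across +/-180 degrees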

    Read the article

  • XSLT: insert parameter value inside of an html attribute

    - by usr
    How to make the following code insert the youtubeId parameter: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxml="urn:schemas-microsoft-com:xslt" xmlns:YouTube="urn:YouTube" xmlns:umbraco.library="urn:umbraco.library" exclude-result-prefixes="msxml umbraco.library YouTube"> <xsl:output method="xml" omit-xml-declaration="yes"/> <xsl:param name="videoId"/> <xsl:template match="/"> <a href="{$videoId}">{$videoId}</a> <object width="425" height="355"> <param name="movie" value="http://www.youtube.com/v/{$videoId}&amp;hl=en"></param> <param name="wmode" value="transparent"></param> <embed src="http://www.youtube.com/v/{$videoId}&amp;hl=en" type="application/x-shockwave-flash" wmode="transparent" width="425" height="355"></embed> </object>$videoId {$videoId} {$videoId} <xsl:value-of select="/macro/videoId" /> </xsl:template> </xsl:stylesheet> As you can see I have experimented quite a bit. <xsl:value-of select="/macro/videoId" /> actually outputs the videoId but all other occurences do not. This must be an easy question to answer but I just cannot get it to work.
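
    The tell-tale detail is that <xsl:value-of select="/macro/videoId"/> prints the value while every use of $videoId does not: the top-level <xsl:param name="videoId"/> is apparently never populated, so the attribute value templates expand an empty string. A minimal sketch of a fix is to bind the value explicitly and use xsl:value-of for text content (the curly-brace syntax only works inside attribute values of literal result elements):

        <!-- Bind the value yourself instead of relying on the unpopulated param. -->
        <xsl:variable name="videoId" select="/macro/videoId"/>

        <xsl:template match="/">
          <a href="{$videoId}"><xsl:value-of select="$videoId"/></a>
          <object width="425" height="355">
            <param name="movie" value="http://www.youtube.com/v/{$videoId}&amp;hl=en"/>
            <param name="wmode" value="transparent"/>
            <embed src="http://www.youtube.com/v/{$videoId}&amp;hl=en"
                   type="application/x-shockwave-flash" wmode="transparent"
                   width="425" height="355"/>
          </object>
        </xsl:template>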

    Read the article

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y ). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output. This is my code: input_data = open('input.txt','r') #this opens the file to read it. output_data = open('output.txt','w') #this opens a file to write to. userStr= (raw_input('Enter the word to be replaced:')) #this prompts the user for a word userReplace =(raw_input('What should I replace all occurences of ' + userStr + ' with?')) #this prompts the user for the replacement word for line in input_data: words = line.split() if userStr in words: output_data.write(line + userReplace) else: output_data.write(line) print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt' #this tells the user that we have replaced the words they gave us input_data.close() #this closes the documents we opened before output_data.close() It won't replace anything in the output file. Help!
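
    The main bug is in the if branch: output_data.write(line + userReplace) appends the replacement to the end of the line instead of substituting it for the matched word, and the split words are never used to rebuild the line. A corrected sketch follows - still without str.replace, and matching whole space-separated words only, so punctuation stuck to a word will not match (the original prompt strings are kept as they were).

        input_data = open('input.txt', 'r')
        output_data = open('output.txt', 'w')

        target = raw_input('Enter the word to be replaced:')
        substitute = raw_input('What should I replace all occurences of ' + target + ' with?')

        for line in input_data:
            new_words = []
            for word in line.split(' '):
                stripped = word.rstrip('\n')
                if stripped == target:
                    # keep whatever trailed the word (usually the newline)
                    new_words.append(substitute + word[len(stripped):])
                else:
                    new_words.append(word)
            output_data.write(' '.join(new_words))

        print 'All occurences of ' + target + ' in input.txt have been replaced by ' + substitute + ' in output.txt'

        input_data.close()
        output_data.close()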

    Read the article

  • Analyzing an IronPython Scope

    - by Vercinegetorix
    I'm trying to write C# code with an embedded IronPython script. Then want to analyze the contents of the script, i.e. list all variables, functions, class and their members/methods. There's an easy way to start, assuming I've got a scope defined and code executed in it already: dynamic variables=pyScope.GetVariables(); foreach (string v in variables) { dynamic dynamicV=pyScope.GetVariable(); /*seems to return everything. variables, functions, classes, instances of classes*/ } But how do I figure out what the type of a variable is? For the following python 'objects', dynamicV.GetType() will return different values: x=5 --system.Int32 y="asdf" --system.String def func():... --IronPython.Runtime.PythonFunction z=class() -- IronPython.Runtime.Types.OldInstance, how can I identify what the actual python class is? class NewClass -- throws an error, GetType() is unavailable. This is almost what I'm looking for. I could capture the exception thrown when unavailable and assume it's a class declaration, but that seems unclean. Is there a better approach? To discover the members/methods of a class it looks like I can use: ObjectOperations op = pyEngine.Operations; object instance = op.Call("className"); foreach (string j in op.GetMemberNames("className")) { object member=op.GetMember(instance, j); Console.WriteLine(member.GetType()); /*once again, GetType() provides some info about the type of the member, but returns null sometimes*/ } Also, how do I get the parameters to a method? Thanks!

    Read the article

  • Why is Request not available for my PDF request?

    - by Beska
    We're trying to create a .NET aspx page that will have a PDF within it. Doing this by hardcoding it is easy. <object height="1250px" width="100%" type="application/pdf" data="our.pdf"> <param value="our.pdf" name="src" /> <param value="transparent" name="wmode" /> </object> (don't worry too much about the transparent thing...we're doing that for other reasons...but I include it here "just in case".) The problem is when we want to generate the PDF dynamically. Our code to populate the literal on the front end looks like this: ltrPDF.Text = String.Format("<object height=\"1250px\" width=\"100%\" type=\"application/pdf\" data=\"ourPdfGenerator.aspx?var0={0}&var1={1}&var2={2}\">", var0, var1, var2); ltrPDF.Text += String.Format("<param value=\"ourPdfGenerator.aspx?var0={0}&var1={1}&var2={2}\">", var0, var1, var2); ltrPDF.Text += "<param value=\"transparent\" name=\"wmode\"/>"; ltrPDF.Text += "</object>"; Kind of ugly, but it seems like it should work. But it doesn't. When I debug, and put a breakpoint on the first line of ourPdfGenerator.aspx.cs Page_Load method, I reach the breakpoint without any difficulty. However, the first thing we do is try to use Request.QueryString: string var0 = Request.QueryString["var0"]; which immediately throws an HttpException: "Request is not available in this context." I'm not clear on: Why isn't it available? What can I do about it?

    Read the article

  • Titanium TableViewRow classname with custom rows

    - by pancake
    I would like to know in what way the 'className' property of a Ti.UI.TableViewRow helps when creating custom rows. For example, I populate a tableview with custom rows in the following way: function populateTableView(tableView, data) { var rows = []; var row; var title, image; var i; for (i = 0; i < data.length; i++) { title = Ti.UI.createLabel({ text : data[i].title, width : 100, height: 30, top: 5, left: 25 }); image = Ti.UI.createImage({ image : 'some_image.png', width: 30, height: 30, top: 5, left: 5 }); /* and, like, 5+ more views or whatever */ row = Ti.UI.createTableViewRow(); row.add(titleLabel); row.add(image); rows.push(row); } tableView.setData(rows); } Of course, this example of a "custom" row is easily created using the standard title and image properties of the TableViewRow, but that isn't the point. How is the allocation of new labels, image views and other child views of a table view prevented in favour of their reuse? I know in iOS this is achieved by using the method -[UITableView dequeueReusableCellWithIdentifier:] to fetch a row object from a 'reservoir' (so 'className' is 'identifier' here) that isn't currently being used for displaying data, but already has the needed child views laid out correctly in it, thus only requiring to update the data contained within (text, image data, etc). As this system is so unbelievably simple, I have a lot of trouble believing the method employed by the Titanium API does not support this. After reading through the API and searching the web, I do however suspect this is the case. The 'className' property is recommended as an easy way to make table views more efficient in Titanium, but its relation to custom table view rows is not explained in any way. If anyone could clarify this matter for me, I would be very grateful.

    Read the article

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How do you design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about this, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition the data into multiple shards sooner or later. So, the question becomes: how do you design data storage for a partitioned tagging system? The tagging system basically has 3 tables: Item (item_id, item_content), Tag (tag_id, tag_title), TagMapping (map_id, tag_id, item_id). That works fine for finding all items for a given tag and finding all tags for a given item, if the table is stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, if we want to partition table Tag into K databases, we can simply choose database number (tag_id % K) to store a given tag. But how do we partition table TagMapping? The TagMapping table represents the many-to-many relationship. I can only imagine having duplication. That is, the same TagMapping content has two copies: one partitioned by tag_id and the other partitioned by item_id. To find tags for a given item, we use the copy partitioned by item_id; to find items for a given tag, we use the copy partitioned by tag_id. As a result, there is data redundancy, and the application level has to keep all the tables consistent. It looks hard. Is there any better solution to this many-to-many partition problem?
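
    For what it is worth, the double-write described above is the standard answer: two copies of the mapping, each sharded on the key it is queried by, with the application (or a queue) keeping them in step. A schema sketch with hypothetical table names:

        -- Copy A lives on shard (item_id % K) and answers "all tags for item X".
        CREATE TABLE TagMappingByItem (
          item_id INT NOT NULL,
          tag_id  INT NOT NULL,
          PRIMARY KEY (item_id, tag_id)
        );

        -- Copy B lives on shard (tag_id % K) and answers "all items for tag Y".
        CREATE TABLE TagMappingByTag (
          tag_id  INT NOT NULL,
          item_id INT NOT NULL,
          PRIMARY KEY (tag_id, item_id)
        );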

    Read the article

  • Refreshing Read-Only (Chained) Property in MVVM

    - by Wonko the Sane
    I'm thinking this should be easy, but I can't seem to figure this out. Take these properties from an example ViewModel (ObservableViewModel implements INotifyPropertyChanged): class NameViewModel : ObservableViewModel { Boolean mShowFullName = false; string mFirstName = "Wonko"; string mLastName = "DeSane"; private readonly DelegateCommand mToggleName; public NameViewModel() { mToggleName = new DelegateCommand(() => ShowFullName = !mShowFullName); } public ICommand ToggleNameCommand { get { return mToggleName; } } public Boolean ShowFullName { get { return mShowFullName; } set { SetPropertyValue("ShowFullName", ref mShowFullName, value); } } public string Name { get { return (mShowFullName ? this.FullName : this.Initials); } } public string FullName { get { return mFirstName + " " + mLastName; } } public string Initials { get { return mFirstName.Substring(0, 1) + "." + mLastName.Substring(0, 1) + "."; } } } The guts of such a [insert your adjective here] View using this ViewModel might look like: <TextBlock x:Name="txtName" Grid.Row="0" Text="{Binding Name}" /> <Button x:Name="btnToggleName" Command="{Binding ToggleNameCommand}" Content="Toggle Name" Grid.Row="1" /> The problem I am seeing is when the ToggleNameCommand is fired. The ShowFullName property is properly updated by the command, but the Name binding is never updated in the View. What am I missing? How can I force the binding to update? Do I need to implement the Name properties as DependencyProperties (and therefore derive from DependencyObject)? Seems a little heavyweight to me, and I'm hoping for a simpler solution. Thanks, wTs
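
    Nothing heavier than INotifyPropertyChanged is needed: Name is derived from ShowFullName, so the view is never told that Name changed. Raising a change notification for the dependent property inside the ShowFullName setter is enough; a sketch, assuming ObservableViewModel exposes some way to raise PropertyChanged (the OnPropertyChanged name here is a guess at that method):

        public Boolean ShowFullName
        {
            get { return mShowFullName; }
            set
            {
                SetPropertyValue("ShowFullName", ref mShowFullName, value);
                // Name (and anything else computed from ShowFullName) must be announced too.
                OnPropertyChanged("Name");
            }
        }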

    Read the article

  • Creating a Linq->HQL provider

    - by Mike Q
    Hi all, I have a client application that connects to a server. The server uses hibernate for persistence and querying so it has a set of annotated hibernate objects for persistence. The client sends HQL queries to the server and gets responses back. The client has an auto-generated set of objects that match the server hibernate objects for query results and basic persistence. I would like to support using Linq to query as well as Hql as it makes the queries typesafe and quicker to build (no more typos in HQL string queries). I've looked around at the following but I can't see how to get them to fit with what I have. NHibernate's Linq provider - requires using NHibernate ISession and ISessionFactory, which I don't have LinqExtender - requires a lot of annotations on the objects and extending a base type, too invasive What I really want is something that will generate give me a nice easy to process structure to build the HQL queries from. I've read most of a 15 page article written by one of the C# developers on how to create custom providers and it's pretty fraught, mainly because of the complexity of the expression tree. Can anyone suggest an approach for implementing Linq - HQL translation? Perhaps a library that will the cleanup of the expression tree into something more SQL/HQLish. I would like to support select/from/where/group by/order by/joins. Not too worried about subqueries.

    Read the article

  • Combinationally unique MySQL tables

    - by Jack Webb-Heller
    So, here's the problem (it's probably an easy one :P) This is my table structure: CREATE TABLE `users_awards` ( `user_id` int(11) NOT NULL, `award_id` int(11) NOT NULL, `duplicate` int(11) NOT NULL DEFAULT '0', UNIQUE KEY `award_id` (`award_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 So it's for a user awards system. I don't want my users to be granted the same award multiple times, which is why I have a 'duplicate' field. The query I'm trying is this (with sample data of 3 and 2) : INSERT INTO users_awards (user_id, award_id) VALUES ('3','2') ON DUPLICATE KEY UPDATE duplicate=duplicate+1 So my MySQL is a little rusty, but I set user_id to be a primary key, and award_id to be a UNIQUE key. This (kind of) created the desired effect. When user 1 was given award 2, it entered. If he/she got this twice, only one row would be in the table, and duplicate would be set to 1. And again, 2, etc. When user 2 was given award 1, it entered. If he/she got this twice, duplicate updated, etc. etc. But when user 1 is given award 1 (after user 2 has already been awarded it), user 2 (with award 1)'s duplicate field increases and nothing is added to user 1. Sorry if that's a little n00bish. Really appreciate the help! Jack
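
    The behaviour follows from the keys: with UNIQUE KEY on award_id alone (and no user_id key visible in the CREATE TABLE), award 1 can exist in only one row for the whole table, so user 1's insert collides with user 2's row and bumps its duplicate counter. Making the (user_id, award_id) pair the key gives the intended "once per user per award" rule; a sketch:

        CREATE TABLE `users_awards` (
          `user_id`   int(11) NOT NULL,
          `award_id`  int(11) NOT NULL,
          `duplicate` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`user_id`, `award_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        -- Now the same (user, award) pair updates its own row; a new pair inserts.
        INSERT INTO users_awards (user_id, award_id) VALUES (3, 2)
          ON DUPLICATE KEY UPDATE duplicate = duplicate + 1;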

    Read the article

  • To OpenID or not to OpenID? Is it worth it?

    - by Eloff
    Does OpenID improve the user experience? Edit Not to detract from the other comments, but I got one really good reply below that outlined 3 advantages of OpenID in a rational bottom line kind of way. I've also heard some whisperings in other comments that you can get access to some details on the user through OpenID (name? email? what?) and that using that it might even be able to simplify the registration process by not needing to gather as much information. Things that definitely need to be gathered in a checkout process: Full name Email (I'm pretty sure I'll have to ask for these myself) Billing address Shipping address Credit card info There may be a few other things that are interesting from a marketing point of view, but I wouldn't ask the user to manually enter anything not absolutely required during the checkout process. So what's possible in this regard? /Edit (You may have noticed stackoverflow uses OpenID) It seems to me it is easier and faster for the user to simply enter a username and password in a signup form they have to go through anyway. I mean you don't avoid entering a username and password either with OpenID. But you avoid the confusion of choosing a OpenID provider, and the trip out to and back from and external site. With Microsoft making Live ID an OpenID provider (More Info), bringing on several hundred million additional accounts to those provided by Google, Yahoo, and others, this question is more important than ever. I have to require new customers to sign up during the checkout process, and it is absolutely critical that the experience be as easy and smooth as possible, every little bit harder it becomes translates into lost sales. No geek factor outweighs cold hard cash at the end of the day :) OpenID seems like a nice idea, but the implementation is of questionable value. What are the advantages of OpenID and is it really worth it in my scenario described above?

    Read the article
