Search Results

Search found 10046 results on 402 pages for 'repository pattern'.

Page 357 of 402.

  • Attributes in XML subtree that belong to the parent

    - by Bart van Heukelom
    Say I have this XML:

        <doc:document>
          <objects>
            <circle radius="10" doc:colour="red" />
            <circle radius="20" doc:colour="blue" />
          </objects>
        </doc:document>

    And this is how it is parsed (pseudo-code):

        // class DocumentParser
        public Document parse(Element edoc) {
            doc = new Document();
            doc.objects = ObjectsParser.parse(edoc.getChild("objects"));
            for ( ...?... ) {
                doc.objectColours.put(object, colour);
            }
            return doc;
        }

    ObjectsParser is responsible for parsing the objects element, but is not and
    should not be aware of the existence of documents. In Document, however,
    colours are associated with objects by means of a Map. What kind of pattern
    would you recommend for handing the colour settings back from
    ObjectsParser.parse to DocumentParser.parse, so it can associate them with
    the objects they belong to in that map? The alternative would be something
    like this:

        <doc:document>
          <objects>
            <circle id="1938" radius="10" />
            <circle id="6398" radius="20" />
          </objects>
          <doc:objectViewSettings>
            <doc:objectViewSetting object="1938" colour="red" />
            <doc:objectViewSetting object="6398" colour="blue" />
          </doc:objectViewSettings>
        </doc:document>

    Ugly!
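
    One way to hand the foreign-namespace attributes back without making
    ObjectsParser aware of documents is a small result holder: the parser
    collects attributes it does not understand, keyed by the object they sat
    on, and the caller decides what they mean. A sketch; the ParseResult type
    and field names are hypothetical, not the asker's API:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // sketch: returned by ObjectsParser.parse alongside the objects
        public class ParseResult {
            public final List<Object> objects = new ArrayList<Object>();
            // per-object bag of attributes the parser did not recognise
            public final Map<Object, Map<String, String>> foreignAttributes =
                    new HashMap<Object, Map<String, String>>();
        }

        // in DocumentParser.parse (fragment):
        //   ParseResult result = ObjectsParser.parse(edoc.getChild("objects"));
        //   doc.objects = result.objects;
        //   for (Map.Entry<Object, Map<String, String>> e :
        //           result.foreignAttributes.entrySet()) {
        //       String colour = e.getValue().get("colour");
        //       if (colour != null) doc.objectColours.put(e.getKey(), colour);
        //   }

    ObjectsParser stays document-agnostic; it merely refuses to drop data it
    cannot interpret.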

    Read the article

  • Returning HTML in the JS portion of a respond_to block throws errors in IE

    - by Horace Loeb
    Here's a common pattern in my controller actions:

        respond_to do |format|
          format.html {}
          format.js { render :layout => false }
        end

    I.e., if the request is non-AJAX, I'll send the HTML content in a layout on
    a brand new page. If the request is AJAX, I'll send down the same content,
    but without a layout (so that it can be inserted into the existing page or
    put into a lightbox or whatever). So I'm always returning HTML in the
    format.js portion, yet Rails sets the Content-Type response header to
    text/javascript. This causes IE to throw a fun little error message. Of
    course I could set the content-type of the response every time I did this
    (or use an after_filter or whatever), but it seems like I'm trying to do
    something relatively standard and I don't want to add additional
    boilerplate code. How do I fix this problem? Alternatively, if the only way
    to fix the problem is to change the content-type of the response, what's
    the best way to achieve the behavior I want (i.e., sending down content
    with layout for non-AJAX and the same content without a layout for AJAX)
    without having to deal with these errors?

    Edit: This blog post has some more info.
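
    For reference, the after_filter workaround the question itself mentions
    can be kept to one place; a sketch (the filter name is made up, and this
    assumes Rails 2.x-style callbacks):

        class ApplicationController < ActionController::Base
          after_filter :use_html_content_type_for_ajax

          private

          def use_html_content_type_for_ajax
            # IE complains when an HTML fragment arrives as text/javascript;
            # rewrite the header once here instead of in every action
            response.content_type = Mime::HTML if request.xhr?
          end
        end

    The respond_to blocks stay boilerplate-free; only the header changes.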

    Read the article

  • Spring Security 3.0 - Customise basic HTTP authentication dialog

    - by gav
    Rather than reading:

        A user name and password are being requested by http://localhost:8080.
        The site says: "Spring Security Application"

    I want to change the prompt, or at least change what the "site says". Does
    anyone know how to do this via resources.xml? In my Grails app Spring
    configuration, my current version is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                http://www.springframework.org/schema/security
                http://www.springframework.org/schema/security/spring-security-3.0.xsd">

            <http auto-config="true" use-expressions="true">
                <http-basic/>
                <intercept-url pattern="/**" access="isAuthenticated()" />
            </http>

            <authentication-manager alias="authenticationManager">
                <authentication-provider>
                    <user-service>
                        <user name="admin" password="admin" authorities="ROLE_ADMIN"/>
                    </user-service>
                </authentication-provider>
            </authentication-manager>
        </beans:beans>
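
    For what it's worth: the browser dialog echoes the basic-auth realm, and
    the security namespace appears to let the realm be set directly on the
    http element. A sketch to try, worth verifying against the
    spring-security-3.0 schema since the attribute name is from memory:

        <http auto-config="true" use-expressions="true" realm="My Custom Realm Text">
            <http-basic/>
            <intercept-url pattern="/**" access="isAuthenticated()" />
        </http>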

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN
    repository have been getting cut off early, and basically only half the
    dump is there. It's not an emergency, but I hate being in this situation.
    It defeats the purpose of making automated backups in the first place.

    The command I'm using is below. If I execute it manually in the terminal,
    it completes fine; the output.txt file is 16 megs in size with all 335
    revisions. But if I leave it to crontab, it bails at the halfway mark, at
    around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space
    on the server, so this is not a disk space issue. It seems to bail after
    two seconds, so this could be a time issue, but the file size is the same
    every single time for the past month, so I don't think it's that either.
    Does crontab execute within a limited memory space?
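
    A diagnostic sketch, not a known fix: cron runs with a minimal environment
    compared to a login shell, so pinning the binary's absolute path and
    capturing stderr plus the exit code usually surfaces what actually went
    wrong (paths below mirror the ones in the question):

        # log stderr and the exit status alongside the dump
        18 00 * * * /usr/bin/svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2>> /home/andrew/dump.log; echo "exit $?" >> /home/andrew/dump.log

    If dump.log stays empty and the exit code is 0, the truncation is
    happening elsewhere; a nonzero exit or an error line narrows it down fast.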

    Read the article

  • Need help with strange Class#getResource() issue

    - by Andreas_D
    I have some legacy code that reads a configuration file from an existing
    jar, like:

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using url variable
        InputStream in = url.openStream();

    Obviously it worked before, but when I execute this code, the URL is valid
    yet I get an IOException on the third line, saying it can't find the file.
    The URL is something like
    "file:jar:c:/path/to/jar/somejar.jar!configuration.properties", so it
    doesn't look like a classpath issue - Java knows pretty well where the
    file can be found. The above code is part of an ant task and it fails
    while the task is executed. Strangely enough, I copied the code and the
    jar file into a separate class and it works as expected; the properties
    file is readable. At some point I changed the code of the ant task to

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using url variable
        InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties");

    and now it works - just until it crashes in another class where a similar
    access pattern is implemented. Why could it have worked before, and why
    does it fail now? The only difference I see at the moment is that the old
    build was done with Java 1.4 while I'm trying it with Java 6 now.

    Workaround: Today I installed Java 1.4.2_19 on the build server and made
    ant use it. To my totally frustrating surprise: the problem is gone. It
    looks to me as if Java 1.4.2 can handle URLs of this type while Java 6
    can't (at least in my context/environment). I'm still hoping for an
    explanation, although I'm facing the work of rewriting parts of the code
    to use Class#getResourceAsStream, which has behaved much more stably...
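
    One detail worth double-checking, offered as a guess: the URL quoted above
    starts with "file:jar:" and has no slash after the "!", whereas a
    well-formed jar URL has the opposite scheme order and a "/" separator -
    either difference alone can make openStream() fail:

        // for comparison, the canonical jar URL shape ("jar:file:" and "!/");
        // assumes java.net.URL and java.io.InputStream are imported
        URL url = new URL("jar:file:/c:/path/to/jar/somejar.jar!/configuration.properties");
        InputStream in = url.openStream();

    If the logged URL really prints the other way around, the question becomes
    who built it, rather than who failed to open it.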

    Read the article

  • At which line in the following code should I commit my unit of work?

    - by Pure.Krome
    I have the following code, which is in a transaction. I'm not sure
    where/when I should be committing my unit of work. On purpose, I've not
    mentioned what type of Repository I'm using - e.g. Linq-To-Sql, Entity
    Framework 4, NHibernate, etc. If someone knows where, can they please
    explain WHY they have said where? (I'm trying to understand the pattern
    through example(s), as opposed to just getting my code to work.) Here's
    what I've got:

        using ( TransactionScope transactionScope = new TransactionScope(
            TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted } ) )
        {
            _logEntryRepository.InsertOrUpdate(logEntry);
            //_unitOfWork.Commit(); // Here, commit #1 ?

            // Now, if this log entry was a NewConnection or a LostConnection,
            // then we need to make sure we update the ConnectedClients.
            if (logEntry.EventType == EventType.NewConnection)
            {
                _connectedClientRepository.Insert(
                    new ConnectedClient { LogEntryId = logEntry.LogEntryId });
                //_unitOfWork.Commit(); // Here, commit #2 ?
            }

            // A (PB) BanKick does _NOT_ register a lost connection,
            // so we need to make sure we handle those scenarios as a LostConnection.
            if (logEntry.EventType == EventType.LostConnection ||
                logEntry.EventType == EventType.BanKick)
            {
                _connectedClientRepository.Delete(
                    logEntry.ClientName, logEntry.ClientIpAndPort);
                //_unitOfWork.Commit(); // Here, commit #3 ?
            }

            _unitOfWork.Commit(); // Here, commit #4 ?
            transactionScope.Complete();
        }
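
    One common reading of the pattern, offered as a sketch rather than a
    definitive answer: the repositories only enlist changes, a single Commit
    (the #4 position) flushes them, and Complete() then makes the whole batch
    atomic; intermediate commits would defeat the point of batching the work:

        // sketch: one flush at the end, mirroring the code above
        _logEntryRepository.InsertOrUpdate(logEntry);

        if (logEntry.EventType == EventType.NewConnection)
            _connectedClientRepository.Insert(
                new ConnectedClient { LogEntryId = logEntry.LogEntryId });

        if (logEntry.EventType == EventType.LostConnection ||
            logEntry.EventType == EventType.BanKick)
            _connectedClientRepository.Delete(
                logEntry.ClientName, logEntry.ClientIpAndPort);

        _unitOfWork.Commit();          // single flush of all pending work
        transactionScope.Complete();   // then commit the ambient transaction

    Whether commits #1-#3 are even legal mid-scope depends on the ORM behind
    the unit of work, which the question deliberately leaves open.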

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a
    model for an annotated lexicon. I want to support an arbitrary number of
    key-value annotations for the words, which can be added or removed at
    runtime. Since there will be a lot of repetition in the names of the keys,
    I don't want to use this solution directly, although the code is similar.

    My design has word objects and property objects. The words and properties
    are stored in separate tables, with a property_values table that links the
    two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the
    properties table, and add 1|1|yes to the property_values table. However, I
    don't have the right mappings and relations in place to make this happen.
    I get the sense from reading the documentation at
    http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I
    want to use an association proxy or something of that sort here, but the
    syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill
    in the other value as well. Any assistance would be greatly appreciated.
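
    The "association object" variant of that documentation pattern seems to
    line up with the goal. Below is an untested sketch against these exact
    tables, with two assumptions baked in: property_values needs a
    (word_id, property_id) composite primary key for the mapper to accept it,
    and attribute_mapped_collection must accept the dotted key (if it does not
    in your version, give PropertyValue a plain name attribute to key on):

        from sqlalchemy.orm.collections import attribute_mapped_collection
        from sqlalchemy.ext.associationproxy import association_proxy

        # association object: one row of property_values
        class PropertyValue(object):
            def __init__(self, prop, value):
                self.property = prop
                self.value = value

        mapper(PropertyValue, property_values, properties={
            'property': relation(Property)
        })

        mapper(Word, words, properties={
            # dict of PropertyValue objects keyed by the property's name
            'property_values': relation(
                PropertyValue,
                collection_class=attribute_mapped_collection('property.name'))
        })

        # word.properties['bar'] = 'yes' creates Property('bar') plus the link
        # row; a real version should look up an existing Property row first,
        # since name is unique
        Word.properties = association_proxy(
            'property_values', 'value',
            creator=lambda name, value: PropertyValue(Property(name), value))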

    Read the article

  • Dynamic binding or switch/case?

    - by kingkai
    The scene is like this: I have different objects that do similar
    operations, as their respective func() implementations. There are two
    kinds of solution for func_manager() to call func() according to the
    different objects.

    Solution 1: use the virtual function mechanism of C++. func_manager works
    differently according to the different object pointer passed in.

        class Object {
            virtual void func() = 0;
        };
        class Object_A : public Object {
            void func() {};
        };
        class Object_B : public Object {
            void func() {};
        };

        void func_manager(Object* a) {
            a->func();
        }

    Solution 2: use a plain switch/case. func_manager works differently
    according to the different type passed in.

        typedef enum _type_t {
            TYPE_A,
            TYPE_B
        } type_t;

        void func_by_a() {
            // do as func() in Object_A
        }
        void func_by_b() {
            // do as func() in Object_B
        }

        void func_manager(type_t type) {
            switch (type) {
            case TYPE_A:
                func_by_a();
                break;
            case TYPE_B:
                func_by_b();
            default:
                break;
            }
        }

    My questions are two:

    1. From the viewpoint of DESIGN PATTERN, which one is better?
    2. From the viewpoint of RUNTIME EFFICIENCY, which one is better?
       Especially as the number of kinds of Object increases, maybe up to
       10-15 in total, which one's overhead overtakes the other? I don't know
       how switch/case is implemented internally - just a bunch of if/else?

    Thanks very much!

    Read the article

  • C++: best way to implement globally scoped data

    - by bobobobo
    I'd like to have program-wide data in a C++ program, without running into
    pesky LNK2005 errors when every source file #includes this "global
    variable repository" file. I have two ways to do it in C++, and I'm asking
    which way is better. The easiest way to do it in C# is just public static
    members:

        // C#:
        public static class DataContainer
        {
            public static Object data1;
            public static Object data2;
        }

    In C++ you can do the same thing. C++ global data, way #1:

        class DataContainer
        {
        public:
            static Object data1;
            static Object data2;
        };

        // in a single .cpp file:
        Object DataContainer::data1;
        Object DataContainer::data2;

    However there's also extern. C++ global data, way #2:

        class DataContainer
        {
        public:
            Object data1;
            Object data2;
        };

        extern DataContainer* dataContainer; // instantiate in .cpp file

    In C++, which is better - or is there possibly another way which I haven't
    thought about? The solution must not cause LNK2005 "object already
    defined" errors.
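
    There is a third way worth sketching: a function-local static behind an
    inline accessor. Nothing is defined at namespace scope in the header, so
    LNK2005 cannot occur, and it also dodges the static-initialization-order
    problem, at the cost of call syntax:

        // way #3 sketch: header-only, no separate .cpp definitions needed;
        // each local static has exactly one instance program-wide and is
        // constructed on first use
        class DataContainer
        {
        public:
            static Object& data1()
            {
                static Object instance;
                return instance;
            }
            static Object& data2()
            {
                static Object instance;
                return instance;
            }
        };

        // usage: DataContainer::data1().someMethod();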

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical
    muscle. I've researched stuff online and read through it, but I'm
    wondering about one last thing: nametables.

    Basically, from what I've read, each 8x8 block in a NES nametable points
    to a location in the pattern table, which holds graphic memory. In
    addition, the nametable also has an attribute table which sets a certain
    color palette for each 16x16 block. They're linked up together like this
    (assuming 16 8x8 blocks):

    Nametable, with A B C D = pointers to sprite data:

        ABBB
        CDCC
        DDDD
        DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, with <
    referencing the value to the left, ^ above, and ' to the left and above:

        1<2<
        ^'^'
        3<3<
        ^'^'

    So, in the example above, the blocks would be colored as so:

        1A 1B 2B 2B
        1C 1D 2C 2C
        3D 3D 3D 3D
        3D 3D 3D 3D

    Now, if I have this on a fixed screen it works great, because the NES
    resolution is 256x240 pixels. But how do these tables get adjusted for
    scrolling? Nametable 0 can scroll into Nametable 1, and if you keep
    scrolling, Nametable 0 will wrap around again. That I get. What I don't
    get is how the attribute table wraps around as well. From what I've read
    online, the 16x16 blocks it assigns attributes to will cause color
    distortions on the edge tiles of the screen (as seen when you scroll left
    to right and vice versa in SMB3).

    The concern I have is that I understand how to scroll the nametables, but
    how do you scroll the attribute table? For instance, if I have a green
    block on the left side of the screen, moving the screen to the right
    should in theory cause the tiles to the right to be green as well, until
    they move more into frame, at which point they'll revert to their normal
    colors.

    Read the article

  • Should I return IEnumerable<T> or IQueryable<T> from my DAL?

    - by Gary '-'
    I know this could be opinion, but I'm looking for best practices. As I
    understand it, IQueryable implements IEnumerable, so in my DAL I currently
    have method signatures like the following:

        IEnumerable<Product> GetProducts();
        IEnumerable<Product> GetProductsByCategory(int cateogoryId);
        Product GetProduct(int productId);

    Should I be using IQueryable here? What are the pros and cons of either
    approach? Note that I am planning on using the Repository pattern, so I
    will have a class like so:

        public class ProductRepository
        {
            DBDataContext db = new DBDataContext(<!-- connection string -->);

            public IEnumerable<Product> GetProductsNew(int daysOld)
            {
                return db.GetProducts()
                         .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld));
            }
        }

    Should I change my IEnumerable<T> to IQueryable<T>? What
    advantages/disadvantages are there to one or the other?
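
    For comparison, a sketch of the IQueryable-returning shape (this assumes
    db.Products is the underlying LINQ-to-SQL table; if GetProducts() is a
    stored-procedure call, its results are not composable anyway and the
    distinction mostly evaporates):

        // sketch: returning IQueryable<T> lets callers compose filters that
        // still translate to SQL, at the price of the DAL losing control over
        // what the final query looks like and when it executes
        public IQueryable<Product> GetProducts()
        {
            return db.Products;
        }

        // caller side: still one SQL round-trip, filter included
        var recent = repo.GetProducts()
                         .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-7));

    With IEnumerable<T>, the same Where runs in memory after the full result
    set has been fetched; that is the core trade-off being asked about.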

    Read the article

  • Git. Checkout feature branch between merge commits

    - by mageslayer
    Hi all. It's kind of weird, but I can't accomplish a pretty common
    operation with git. Basically what I want is to check out a feature
    branch, not using its head but using a SHA id. This SHA points between
    merges from the master branch. The problem is that all I get is just the
    master branch, without the commits from the feature branch.

    Currently I'm trying to fix a regression introduced earlier in the master
    branch. Just to be more descriptive, I crafted a small bash script to
    recreate the problem repository:

        #!/bin/bash
        rm -rf ./.git
        git init

        echo "test1" > test1.txt
        git add test1.txt
        git commit -m "test1" -a

        git checkout -b patches master
        echo "test2" > test2.txt
        git add test2.txt
        git commit -m "test2" -a

        git checkout master
        echo "test3" > test3.txt
        git add test3.txt
        git commit -m "test3" -a

        echo "test4" > test4.txt
        git add test4.txt
        git commit -m "test4" -a

        echo "test5" > test5.txt
        git add test5.txt
        git commit -m "test5" -a

        git checkout patches
        git merge master

        # Now how to get a branch having all commits from patches
        # + test3.txt + test4.txt - test5.txt ???

    Basically all I want is just to check out branch "patches" with files 1-4,
    but not including test5.txt. Doing:

        git checkout [sha_where_test4.txt_entered]

    just gives a branch with test1, test3, test4, but excluding test2.txt.
    Thanks.
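
    One possible approach, sketched with placeholder SHAs: the state "patches
    plus test4 but minus test5" never existed as a single commit, so it has to
    be built. Branch at the master commit that added test4, then merge patches
    as it was before it absorbed master (i.e. the commit that added test2):

        # branch where test4 entered master (placeholder SHA)
        git checkout -b partial <sha_of_test4_commit>

        # merge the last patches-only commit, not the patches head
        # (the head already contains test5 via the merge of master)
        git merge <sha_of_test2_commit>

        # result: test1-test4 present, test5 absent

    The key point is merging the pre-merge commit of patches; merging the
    branch name would drag test5 back in.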

    Read the article

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
    Guys/gals, I have a really basic Team Foundation Server 2010 question. For
    those of you who have played around with TFS 2010: is it a lot more
    lightweight than TFS 2008? I remember installing all the pieces needed for
    TFS 2008 on one machine at work. I remember it being a pain to install (I
    know 2010 is supposed to be much better). We wanted to play around with it
    a little bit to see if it met our needs. Well, it brought that machine to
    a screeching halt.

    I'm needing a source control repository for home, and I thought why not
    just install TFS 2010 so I can get familiar with it, and maybe in the
    future I can make a better sell to my organization and FINALLY get them to
    move off of SourceSafe. But my concern is that I only have one server at
    home (granted, I already have SQL Server installed) and I don't want to
    buy a machine just for this purpose. I'd also like to get more familiar
    with CI too. Anyway, if TFS is going to be too heavy, I'll just use
    Subversion, but I'd like to use TFS if possible. Any help would be
    appreciated. Thanks, Ncage

    Read the article

  • Should I use IDisposable for purely managed resources?

    - by John Gietzen
    Here is the scenario: I have an object called a Transaction that needs to
    make sure that only one entity has permission to edit it at any given
    time. In order to facilitate a long-lived lock, I have the class generate
    a token object that can be used to make the edits. You would use it like
    this:

        var transaction = new Transaction();

        using (var tlock = transaction.Lock())
        {
            transaction.Update(data, tlock);
        }

    Now, I want the TransactionLock class to implement IDisposable so that its
    usage is clear. But I don't have any unmanaged resources to dispose.
    However, the TransactionLock object itself is a sort of "unmanaged
    resource" in the sense that the CLR doesn't know how to properly finalize
    it. All of this would be fine and dandy; I would just use IDisposable and
    be done with it. However, my issue comes when I try to do this in the
    finalizer:

        ~TransactionLock()
        {
            this.Dispose(false);
        }

    I want the finalizer to release the transaction from the lock, if
    possible. How, in the finalizer, do I detect whether the parent
    transaction (this.transaction) has already been finalized? Is there a
    better pattern I should be using? The Transaction class looks something
    like this:

        public sealed class Transaction
        {
            private readonly object lockMutex = new object();
            private TransactionLock currentLock;

            public TransactionLock Lock()
            {
                lock (this.lockMutex)
                {
                    if (this.currentLock != null)
                        throw new InvalidOperationException(/* ... */);

                    this.currentLock = new TransactionLock(this);
                    return this.currentLock;
                }
            }

            public void Update(object data, TransactionLock tlock)
            {
                lock (this.lockMutex)
                {
                    this.ValidateLock(tlock);
                    // ...
                }
            }

            internal void ValidateLock(TransactionLock tlock)
            {
                if (this.currentLock == null)
                    throw new InvalidOperationException(/* ... */);

                if (this.currentLock != tlock)
                    throw new InvalidOperationException(/* ... */);
            }

            internal void Unlock(TransactionLock tlock)
            {
                lock (this.lockMutex)
                {
                    this.ValidateLock(tlock);
                    this.currentLock = null;
                }
            }
        }
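
    For what it's worth, the standard Dispose(bool) guidance sidesteps the
    "has the transaction been finalized yet?" question entirely: a finalizer
    must not touch other managed objects, because finalization order is not
    guaranteed. A sketch of that convention applied here:

        // sketch: the parent Transaction is only touched on the Dispose path,
        // never from a finalizer; with no unmanaged state to release, the
        // safest finalizer is no finalizer at all
        public sealed class TransactionLock : IDisposable
        {
            private readonly Transaction transaction;

            internal TransactionLock(Transaction transaction)
            {
                this.transaction = transaction;
            }

            public void Dispose()
            {
                // safe: caller-driven, both objects are still alive
                this.transaction.Unlock(this);
                GC.SuppressFinalize(this);
            }

            // intentionally no ~TransactionLock(): an unlock from the
            // finalizer could race with, or resurrect, a dead Transaction
        }

    A lock that is only ever released by the GC was leaked by the caller
    anyway; some designs log that in debug builds rather than try to recover.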

    Read the article

  • Is there a way to increase the efficiency of shared_ptr by storing the reference count inside the co

    - by BillyONeal
    Hello everyone :) This is becoming a common pattern in my code, for when I
    need to manage an object that needs to be noncopyable because either (a)
    it is "heavy" or (b) it is an operating system resource, such as a
    critical section:

        class Resource;

        class Implementation : public boost::noncopyable
        {
            friend class Resource;
            HANDLE someData;
            Implementation(HANDLE input) : someData(input) {};
            void SomeMethodThatActsOnHandle()
            {
                // Do stuff
            };
        public:
            ~Implementation() { FreeHandle(someData); };
        };

        class Resource
        {
            boost::shared_ptr<Implementation> impl;
        public:
            explicit Resource(int argA)
            {
                HANDLE handle = SomeLegacyCApiThatMakesSomething(argA);
                if (handle == INVALID_HANDLE_VALUE)
                    throw SomeTypeOfException();
                impl.reset(new Implementation(handle));
            };
            void SomeMethodThatActsOnTheResource()
            {
                impl->SomeMethodThatActsOnTheHandle();
            };
        };

    This way, shared_ptr takes care of the reference counting headaches,
    allowing Resource to be copyable, even though the underlying handle should
    only be closed once all references to it are destroyed.

    However, it seems like we could save the overhead of allocating
    shared_ptr's reference counts and such separately if we could move that
    data inside Implementation somehow, like boost's intrusive containers do.
    If this is making the premature-optimization hackles rise on some people,
    I actually agree that I don't need this for my current project. But I'm
    curious whether it is possible.
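
    It is possible: boost::intrusive_ptr does exactly this, at the price of
    writing the two hook functions yourself. A sketch (untested; the counter
    below is not thread-safe, a real version might use an atomic):

        #include <boost/intrusive_ptr.hpp>

        // the count lives inside Implementation, and intrusive_ptr calls
        // these two hooks instead of allocating a separate control block
        class Implementation
        {
            long refCount;
        public:
            Implementation() : refCount(0) {}

            friend void intrusive_ptr_add_ref(Implementation* p)
            {
                ++p->refCount;
            }
            friend void intrusive_ptr_release(Implementation* p)
            {
                if (--p->refCount == 0) delete p;
            }
        };

        class Resource
        {
            boost::intrusive_ptr<Implementation> impl;  // one allocation, not two
        };

    A middle ground worth noting: boost::make_shared also collapses the object
    and its count into a single allocation, without any intrusive hooks.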

    Read the article

  • What are good strategies for organizing single class per query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controller -
    Services - Repositories. The services consist of aggregate-root classes
    that contain methods. Each method is a specific operation that gets
    performed, such as retrieving a list of projects, adding a new project, or
    searching for a project.

    The problem with this is that my service classes are becoming really fat,
    with a lot of methods. Right now I am separating methods into categories
    with #region tags, but this is quickly getting out of control. I can
    definitely see it becoming hard to determine what functionality already
    exists and where modifications need to go. Since each method in the
    service classes is isolated and doesn't really interact with the others,
    they really could be more standalone.

    After reading some articles, such as this, I am thinking of following the
    single-query-per-class model, as it seems like a more organized solution.
    Instead of trying to figure out which class and method you need to call to
    perform an operation, you just have to figure out the class. My only
    reservation with the single-query-per-class method is that I need some way
    to organize the 50+ classes I will end up with. Does anyone have any
    suggestions for strategies to best organize this type of pattern?
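
    One convention that seems to fit, offered as a sketch (the type names and
    the IProjectRepository dependency are hypothetical): group the query
    classes by aggregate into folders and namespaces, so the namespace takes
    over the grouping job the #regions were doing:

        // hypothetical layout:
        //   Services/Projects/SearchProjectsQuery.cs
        //   Services/Projects/AddProjectCommand.cs
        //   Services/Users/GetUserQuery.cs
        namespace MyApp.Services.Projects
        {
            public class SearchProjectsQuery
            {
                private readonly IProjectRepository repository;

                public SearchProjectsQuery(IProjectRepository repository)
                {
                    this.repository = repository;
                }

                public IList<Project> Execute(string searchTerm)
                {
                    return repository.Find(searchTerm);
                }
            }
        }

    Finding functionality then becomes a folder scan per aggregate, and each
    class stays small enough that its name is its documentation.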

    Read the article

  • If the dependency is a unique snapshot version and install is called, what does maven select?

    - by chrsk
    Imagine two projects. The first is the framework-core project, which is at
    version 1.1.0 and has several snapshot builds. The other is the
    example-business project, which has the following dependency on
    framework-core, pinned to build iteration number 9:

        <dependency>
            <groupId>org.example</groupId>
            <artifactId>framework-core</artifactId>
            <version>1.1.0-20100518.134928-9</version>
        </dependency>

    What happens if mvn install is called on framework-core? I found out that
    the artifact is copied to the local repository folder and is named
    *.1.1.0-SNAPSHOT.jar (as expected). This led me to the assumption that
    this version is only used if the 1.1.0-SNAPSHOT version itself is defined
    as the dependency, and not the precise build.

    To test something locally without deploying it to the Maven repository:
    call mvn install, change the dependency to 1.1.0-SNAPSHOT - and the
    artifact just installed is used? Or is it possible to overwrite the
    specific build (using the install lifecycle phase)?
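
    For reference, the two forms being compared (as far as I can tell, the
    observed behavior matches this split):

        <!-- pinned to one timestamped build; a local "mvn install" of
             framework-core will not satisfy this, because install only
             refreshes the 1.1.0-SNAPSHOT copy in the local repository -->
        <version>1.1.0-20100518.134928-9</version>

        <!-- floating snapshot; resolves to the freshest build available,
             including one just installed locally, which is how a local,
             not-yet-deployed change can be tested -->
        <version>1.1.0-SNAPSHOT</version>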

    Read the article

  • Sudoku Recursion Issue (Java)

    - by SkylineAddict
    I'm having an issue with creating a random Sudoku grid. I tried modifying
    a recursive pattern that I used to solve the puzzle. The puzzle itself is
    a two-dimensional integer array. This is what I have (by the way, the
    method doesn't only randomize the first row; I had an idea to randomize
    the first row, then just decided to do the whole grid):

        public boolean randomizeFirstRow(int row, int col) {
            Random rGen = new Random();
            if (row == 9) {
                return true;
            } else {
                boolean res;
                for (int ndx = rGen.nextInt() + 1; ndx <= 9;) {
                    // Input values into the boxes
                    sGrid[row][col] = ndx;
                    // Then test to see if the value is valid
                    if (this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid)
                            && this.isQuadrantValid(row, col, sGrid)) {
                        // grid valid, move to the next cell
                        if (col + 1 < 9) {
                            res = randomizeFirstRow(row, col + 1);
                        } else {
                            res = randomizeFirstRow(row + 1, 0);
                        }
                        // If the value inputed is valid, restart loop
                        if (res == true) {
                            return true;
                        }
                    }
                }
            }
            // If no value can be put in, set value to 0 to prevent program counting to 9
            setGridValue(row, col, 0);
            // Return to previous method in stack
            return false;
        }

    This results in an ArrayIndexOutOfBoundsException with a ridiculously
    high or low number (+-100,000). I've tried to see how far it goes into
    the method, and it never goes beyond this line:

        if (this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid)
                && this.isQuadrantValid(row, col, sGrid))

    I don't understand how the array index gets so high. Can anyone help me
    out?
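
    One thing jumps out, offered as a guess rather than a confirmed diagnosis:
    Random.nextInt() with no argument returns any of the roughly four billion
    possible ints (negatives included), which would explain validation code
    blowing up on indices around +-100,000. The bounded overload keeps
    candidates in range (note the loop as posted also never increments ndx):

        // probable culprit: nextInt() spans all of int; nextInt(9) is 0..8
        int ndx = rGen.nextInt(9) + 1;  // candidate cell value in 1..9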

    Read the article

  • Is there any way to provide a custom factory for the entities EF4 creates?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity
    Framework 4 supports them. I decided to try it out with a domain-driven-
    design-oriented architecture, and finished with domain entities that have
    dependencies on services. So far so good. Imagine my Products are POCO
    objects. When I query for objects like this:

        NorthwindContext db = new NorthwindContext();
        var products = db.Products.ToList();

    EF creates the instances of the products for me. Now I want to inject
    dependencies into my POCO objects (products). The only way I see is to
    make some method within NorthwindContext that does something like the
    pseudo-code below:

        public List<Product> GetProducts()
        {
            var products = database.Products.ToList();
            container.BuildUp(products); // inject dependencies
            return products;
        }

    But what if I want to make my repository more flexible, like this:

        public ObjectSet<Product> GetProducts() { ... }

    So I really need a factory, to make it more lazy and LINQ-friendly. Please
    help!
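
    EF4 does expose a hook that can stand in for a factory: the
    ObjectMaterialized event fires for every entity the context materializes.
    A sketch, assuming a Unity-style container (the BuildUp call is Unity's;
    substitute your container's equivalent):

        // sketch: not a true replacement factory, but every entity EF
        // creates passes through the handler, so GetProducts() can keep
        // returning plain ObjectSet<Product>/IQueryable<Product>
        public class NorthwindContext : ObjectContext
        {
            public NorthwindContext(IUnityContainer container)
                : base("name=NorthwindEntities")
            {
                ObjectMaterialized += (sender, e) =>
                    container.BuildUp(e.Entity.GetType(), e.Entity);
            }

            public ObjectSet<Product> Products
            {
                get { return CreateObjectSet<Product>(); }
            }
        }

    Lazy queries stay lazy; injection happens per entity at materialization
    time instead of in an eager wrapper method.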

    Read the article

  • Android - Basic CRUD (REST/RPC client) to remote server

    - by bsreekanth
    There is a lot of discussion about REST client implementation in Android,
    based on the talk at Google I/O 2010 by Virgil Dobjanschi. My requirements
    may not be that involved, and I have some freedom to choose the
    configuration:

    - I only target tablets
    - configuration changes can be prevented if there is no other easy way
      (fix at landscape mode, etc.)

    What I am trying to achieve: basic CRUD operations against my server
    (JSON RPC/REST) - essentially mimicking an ajax request from an Android
    app (no WebView; it needs to be a native app). Based on the talk mentioned
    above and some reading, I see these options:

    1. Implement any of the 3 patterns mentioned in the Google I/O talk.
       Especially the last pattern may be more suitable, as I don't care much
       about caching. But I'm not sure how "real time" that sync
       implementation is.

    2. Use an HTTP request in an AsyncTask. Simplest, but I need to avoid
       re-sending the request during a change in device configuration
       (orientation change, etc.). Even if I fix the app at one orientation,
       re-creation of the activity still happens, so I need to handle it
       elegantly.

    3. Use a service to handle the HTTP requests (see the sketch below this
       list). So far, the advice is to use a service for long-running
       requests; I'm not sure whether it is a good approach for simple
       GET/POST/PUT requests.

    Please share your experience on what could be the best way.
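
    A minimal sketch of option 3, since it is the one that survives
    configuration changes for free (class names, action strings, and the
    broadcast-based result delivery are all illustrative, not a recommended
    framework):

        // IntentService runs each request on a worker thread outside the
        // activity lifecycle, so an orientation change cannot re-send it;
        // the re-created activity just re-registers a receiver
        public class RestService extends IntentService {
            public RestService() { super("RestService"); }

            @Override
            protected void onHandleIntent(Intent intent) {
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL(intent.getStringExtra("url")).openConnection();
                    String body = readStream(conn.getInputStream());
                    sendBroadcast(new Intent("com.example.REST_RESULT")
                            .putExtra("body", body));
                } catch (IOException e) {
                    sendBroadcast(new Intent("com.example.REST_ERROR"));
                }
            }

            private static String readStream(InputStream in) {
                // drain the stream into a String ("\\A" = whole input)
                Scanner s = new Scanner(in).useDelimiter("\\A");
                return s.hasNext() ? s.next() : "";
            }
        }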

    Read the article

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and
    push. I go to my server, on which I have a clone of the repository, and I
    do a fetch:

        git fetch

    then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

        git pull

    and my changes from my other computer are NOT. THERE. What the heck am I
    doing wrong? If I had hair, I would have pulled it out. Thankfully I am
    bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   app/controllers/chat_controller.rb
        #       modified:   app/views/dashboard/index.html.erb
        #       modified:   app/views/dashboard/layout.js.erb
        #       modified:   app/views/layouts/dashboard.html.erb
        #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #       modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       .loadpath
        #       .project
        #       config/database.yml
        #       config/environments/development.yml
        #       config/environments/production.yml
        #       config/environments/test.yml
        #       log/
        no changes added to commit (use "git add" and/or "git commit -a")
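
    Reading the status output at the bottom, the answer may be in the question
    itself: those files are "Changed but not updated", i.e. never staged or
    committed, so the push had nothing new to send. A sketch of the missing
    step, run on the machine where the edits live:

        git add app/ public/        # stage the modified files
        git commit -m "chat work"
        git push origin chat

        # then, on the server clone:
        git checkout chat
        git pull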

    Read the article

  • How to catch YouTube embed code and turn into URL

    - by Jonathan Vanasco
    I need to strip YouTube embed codes down to their URL only. This is the
    exact opposite of all but one question on Stack Overflow: most people want
    to turn the URL into an embed code. That one question addresses the usage
    pattern I want, but is tied to a specific embed code's regex ("Strip
    YouTube Embed Code Down to URL Only").

    I'm not familiar with how YouTube has offered embeds over the years, or
    how the sizes differ. According to their current site, there are 2
    possible embed templates and a variety of options. If that's it, I can
    handle a regex myself (see the sketch below) - but I was hoping someone
    had more knowledge they could share, so I could write a proper regex
    pattern that matches them all and not run into endless edge cases.

    The full use-case scenario:

    - user enters content in a web-based wysiwyg editor
    - backend cleans out YouTube and other embed codes, reformatting approved
      embeds into an internal format as the text is converted to markdown
    - on display, the appropriate current template/code is generated for
      YouTube or other third-party sites

    At a previous company, our tech team devised a plan where YouTube videos
    were embedded by listing the URL only. That worked great, but it was in a
    CMS where everyone was trained. I'm trying to create similar storage, but
    for user-generated content.
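
    A rough sketch of the matching side, with exactly the caveat the question
    raises: this only covers the two current templates (the old <object>/
    <embed> code, which carries the id after /v/, and the iframe code, which
    carries it after /embed/), not every format YouTube has ever shipped:

        import re

        # rough sketch, not exhaustive across historical embed formats
        VIDEO_ID = re.compile(
            r"""youtube(?:-nocookie)?\.com/   # main or privacy domain
                (?:v|embed)/                  # object style or iframe style
                ([\w-]{11})                   # the 11-character video id""",
            re.VERBOSE)

        def embed_to_url(html):
            m = VIDEO_ID.search(html)
            return "http://www.youtube.com/watch?v=" + m.group(1) if m else None

    Storing only the URL, as described, then lets the display layer regenerate
    whatever embed template is current.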

    Read the article

  • EF/LINQ: Where() against a property of a subtype

    - by ladenedge
    I have a set of POCOs, all of which implement the following simple
    interface:

        interface IIdObject
        {
            int Id { get; set; }
        }

    A subset of these POCOs implement this additional interface:

        interface IDeletableObject : IIdObject
        {
            bool IsDeleted { get; set; }
        }

    I have a repository hierarchy that looks something like this:

        IRepository<T> <- BasicRepository<T> <- ValidatingRepository<T>
        (where T is IIdObject)

    I'm trying to add a FilteringRepository to the hierarchy such that all of
    the POCOs that implement IDeletableObject have a
    Where(p => p.IsDeleted == false) filter applied before any other queries
    take place. My goal is to avoid duplicating the hierarchy solely for
    IDeletableObjects. My first attempt looked like this:

        public override IQueryable<T> Query()
        {
            return base.Query().Where(t => ((IDeletableObject)t).IsDeleted == false);
        }

    This works well with LINQ to Objects, but when I switch to an EF backend I
    get: "LINQ to Entities only supports casting Entity Data Model primitive
    types."

    I went on to try some fancier parameterized solutions, but they ultimately
    failed because I couldn't make T covariant in the following case, for some
    reason I don't quite understand:

        interface IQueryFilter<out T> // error
        {
            Expression<Func<T, bool>> GetFilter();
        }

    I'd be happy to go into more detail on my more complicated solutions if it
    would help, but I think I'll stop here for now in the hope that someone
    might have an idea for me to try. Thanks very much in advance!
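
    One workaround that avoids both the cast and the variance fight (the
    covariance attempt fails because T sits in an input position inside
    Func<T, bool>): build the predicate against typeof(T) directly, so the
    expression tree never mentions the interface. A sketch, assuming the
    entities implement IsDeleted implicitly:

        // requires System.Linq.Expressions; Expression.Property resolves
        // IsDeleted on the concrete entity type T, so LINQ to Entities never
        // sees an interface cast
        public override IQueryable<T> Query()
        {
            var t = Expression.Parameter(typeof(T), "t");
            var notDeleted = Expression.Lambda<Func<T, bool>>(
                Expression.Equal(
                    Expression.Property(t, "IsDeleted"),
                    Expression.Constant(false)),
                t);
            return base.Query().Where(notDeleted);
        }

    The FilteringRepository can apply this only when
    typeof(IDeletableObject).IsAssignableFrom(typeof(T)), keeping one
    hierarchy for both kinds of POCO.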

    Read the article

  • Tagging in Subversion - how do I make the decision about continuing to work on my trunk vs. the new

    - by Howiecamp
    I'm running TortoiseSVN to manage a project. Obviously the principles
    around tagging apply to any implementation of SVN, but in this question
    I'll be referring to some TortoiseSVN-specific dialog boxes and messages.

    My working directory and the Subversion repository structure both have a
    Source root directory, with the Trunk, Tags, and Branches directories
    underneath. I'm working out of the Trunk directory in my working copy, and
    it's pointing at the Trunk directory in the repo. I want to apply a tag
    "Release1", so I click the "Branch/tag..." menu option and set the repo
    path to my [repo_path]/bla/Source/Tags/Release1 tag. This dialog box gives
    me the option to "Switch my working copy to new branch/tag". I understand
    that if this option is left unchecked, the new "Release1" branch under
    /Tags will be created, but my working copy will remain on the previous
    "Trunk" path. If I do check this option (or use the Switch command), I
    understand that my working copy will switch to the new "Release1" branch
    under "/Tags".

    Where I'm missing a concept is how to make this decision. It doesn't seem
    like I want to switch my working directory to the recently created tag,
    since by definition (?) I want that tag to be a snapshot of my code as of
    a point in time. If I don't switch the working directory, I'll continue
    working off Trunk, and when I'm ready to take another snapshot I'll make
    another tag. And so on...

    Am I understanding this right, or am I stating something incorrectly in
    the previous paragraph (e.g. the statement about not wanting to switch to
    the tag, since the tag should represent a point-in-time snapshot) or
    otherwise missing something regarding how to make this decision?

    Read the article

  • git changes modification time of files

    - by tanascius
    In the Git FAQ I can read that Git sets the current time as the timestamp
    on every file it modifies, but only those. However, I tried this command
    sequence (EDIT: added the complete command sequence):

        $ git init test && cd test
        Initialized empty Git repository in d:/test/.git/

        $ touch filea fileb

        $ git add .

        $ git commit -m "first commit"
        [master (root-commit) fcaf171] first commit
         0 files changed, 0 insertions(+), 0 deletions(-)
         create mode 100644 filea
         create mode 100644 fileb

        $ ls -l > filea

        $ touch fileb -t 200912301000

        $ ls -l
        total 1
        -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

        $ git status -a
        warning: LF will be replaced by CRLF in filea
        # On branch master
        warning: LF will be replaced by CRLF in filea
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   filea
        #

        $ git checkout .

        $ ls -l
        total 0
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect the
    timestamp to be unchanged. Are my commands causing a problem? Maybe it is
    possible to do something like a git checkout . --modified instead? I am
    using git version 1.6.5.1.1367.gcd48 under mingw32/Windows XP.

    Read the article
