Search Results

Search found 9864 results on 395 pages for 'solo developer'.


  • Parse XML function names and call within whole assembly

    - by Matt Clarkson
    Hello all, I have written an application that unit tests our hardware via an internet browser. I have command classes in the assembly that are a wrapper around individual web browser actions, such as ticking a checkbox or selecting from a dropdown box: BasicConfigurationCommands EventConfigurationCommands StabilizationCommands and a set of test classes that use the command classes to perform scripted tests: ConfigurationTests StabilizationTests These are then invoked via the GUI to run prescripted tests by our QA team. However, as the firmware is changed quite quickly between releases, it would be great if a developer could write an XML file that could invoke either the tests or the commands: <?xml version="1.0" encoding="UTF-8" ?> <testsuite> <StabilizationTests> <StressTest repetition="10" /> </StabilizationTests> <BasicConfigurationCommands> <SelectConfig number="2" /> <ChangeConfigProperties name="Weeeeee" timeOut="15000" delay="1000"/> <ApplyConfig /> </BasicConfigurationCommands> </testsuite> I have been looking at the System.Reflection namespace and have seen examples using GetMethod and then Invoke. This requires me to create the class object at compile time, and I would like to do all of this at runtime. I would need to scan the whole assembly for the class name and then scan for the method within the class. This seems like a large solution, so any information pointing me (and future readers of this post) towards an answer would be great! Thanks for reading, Matt

    Read the article

  • Deploying Application with mvc in shared hosting server

    - by ankita-13-3
    We have created an MVC web application in ASP.NET 3.5. It runs absolutely fine locally, but when we deploy it to the GoDaddy hosting server (shared hosting), it shows an error related to a trust level problem. We contacted GoDaddy support and they say that they only support medium trust level applications. So how do I convert my application to medium trust level? Do I need to make changes to the web.config file? It shows the following error: Security Exception Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. Exception Details: System.Security.SecurityException: Request failed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SecurityException: Request failed.] System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Assembly asm, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +150 System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Object assemblyOrString, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +100 System.Security.CodeAccessSecurityEngine.CheckSetHelper(PermissionSet grants, PermissionSet refused, PermissionSet demands, RuntimeMethodHandle rmh, Object assemblyOrString, SecurityAction action, Boolean throwException) +284 System.Security.PermissionSetTriple.CheckSetDemand(PermissionSet demandSet, PermissionSet& alteredDemandset, RuntimeMethodHandle rmh) +69 System.Security.PermissionListSet.CheckSetDemand(PermissionSet pset, RuntimeMethodHandle rmh) +150 System.Security.PermissionListSet.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +30 System.Threading.CompressedStack.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +40 System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, CompressedStack securityContext) +123 System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, Resolver accessContext) +41 Look forward to your help. Regards, Ankita Software Developer Shakti Informatics Pvt. Ltd. Web Template Hub

    Read the article

  • what is wrong in java AES decrypt function?

    - by rohit
    Hi, I modified the code available on http://java.sun.com/developer/technicalArticles/Security/AES/AES_v1.html and made encrypt and decrypt methods in my program, but I am getting a BadPaddingException and the function is returning null. Why is it happening? What's going wrong? Please help me. These are the variables I am using: kgen = KeyGenerator.getInstance("AES"); kgen.init(128); raw = new byte[]{(byte)0x00,(byte)0x11,(byte)0x22,(byte)0x33,(byte)0x44,(byte)0x55,(byte)0x66,(byte)0x77,(byte)0x88,(byte)0x99,(byte)0xaa,(byte)0xbb,(byte)0xcc,(byte)0xdd,(byte)0xee,(byte)0xff}; skeySpec = new SecretKeySpec(raw, "AES"); cipher = Cipher.getInstance("AES"); plainText=null; cipherText=null; The following is the decrypt function: public String decrypt(String cipherText) { try { cipher.init(Cipher.DECRYPT_MODE, skeySpec); byte[] original = cipher.doFinal(cipherText.getBytes()); plainText = new String(original); } catch(BadPaddingException e) { } return plainText; }
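
    A likely culprit here is the round trip of the ciphertext through a String: cipherText.getBytes() does not return the exact bytes produced by encryption, so doFinal sees corrupted input and throws BadPaddingException, and because the catch block swallows the exception, the method falls through and returns null. Below is a minimal sketch of one way to keep the ciphertext intact by carrying it as Base64 text instead; class and method names are illustrative, and java.util.Base64 assumes Java 8 or later.

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class AesRoundTrip {
            // Same fixed 128-bit key bytes as in the question.
            private static final byte[] RAW_KEY = {
                (byte) 0x00, (byte) 0x11, (byte) 0x22, (byte) 0x33,
                (byte) 0x44, (byte) 0x55, (byte) 0x66, (byte) 0x77,
                (byte) 0x88, (byte) 0x99, (byte) 0xaa, (byte) 0xbb,
                (byte) 0xcc, (byte) 0xdd, (byte) 0xee, (byte) 0xff
            };

            // Encrypt and return the ciphertext as Base64 so it survives being held in a String.
            public static String encrypt(String plainText) throws Exception {
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(RAW_KEY, "AES"));
                byte[] cipherBytes = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(cipherBytes);
            }

            // Decode the Base64 text back to the exact ciphertext bytes before calling doFinal.
            public static String decrypt(String base64CipherText) throws Exception {
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(RAW_KEY, "AES"));
                byte[] original = cipher.doFinal(Base64.getDecoder().decode(base64CipherText));
                return new String(original, StandardCharsets.UTF_8);
            }
        }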

    Read the article

  • Quick help refactoring Ruby Class

    - by mplacona
    I've written this class that returns feed updates, but I'm thinking it can be further improved. It's not glitchy or anything, but as a new Ruby developer, I reckon it's always good to improve :-) class FeedManager attr_accessor :feed_object, :update, :new_entries require 'feedtosis' def initialize(feed_url) @feed_object = Feedtosis::Client.new(feed_url) fetch end def fetch @feed_object.fetch end def update @updates = fetch end def updated? @updates.new_entries.count > 0 ? true : false end def new_entries @updates.new_entries end end As you can see, it's quite simple, but the things I'm seeing that aren't quite right are: Whenever I call fetch via terminal, it prints a list with the updates, when it's really supposed to return an object. So as an example, in the terminal if I do something like: client = Feedtosis::Client.new('http://stackoverflow.com/feeds') result = client.fetch I then get: <Curl::Easy http://stackoverflow.com/feeds> Which is exactly what I'd expect. However, when doing the same thing by initializing the class with: FeedManager.new("http://stackoverflow.com/feeds") I'm getting the object returned as an array with all the items in the feed. I'm sure I'm doing something wrong, so any help refactoring this class will be greatly appreciated. Also, I'd like to see comments about my implementation; any sort of comment to make it better would be welcome. Thanks in advance

    Read the article

  • Static variable not initialized

    - by Simon Linder
    Hi all, I've got a strange problem with a static variable that is obviously not initialized as it should be. I have a huge project that runs with Windows and Linux. As the Linux developer doesn't have this problem, I suspect that this is some kind of weird Visual Studio stuff. Header file class MyClass { // some other stuff here ... private: static AnotherClass* const Default_; }; CPP file AnotherClass* const Default_(new AnotherClass("")); MyClass(AnotherClass* const var) { assert(Default_); ... } The problem is that Default_ is always NULL. I also tried setting a breakpoint at the initialization of that variable but I cannot catch it. There is a similar problem in another class. CPP file std::string const MyClass::MyString_ ("someText"); MyClass::MyClass() { assert(MyString_ != ""); ... } In this case MyString_ is always empty, so again not initialized. Does anyone have an idea about that? Is this a Visual Studio settings problem? Cheers Simon

    Read the article

  • Will IntelliTrace(tm) (historical debugging) be available for unmanaged C++ in future versions of Visual Studio?

    - by Tim
    I love the idea of historical debugging in VS 2010. However, I am really disappointed that unmanaged C++ is left out. IntelliTrace supports debugging Visual Basic and C# applications that use .NET version 2.0, 3.0, 3.5, or 4. You can debug most applications, including applications that were created by using ASP.NET, Windows Forms, WPF, Windows Workflow, and WCF. IntelliTrace does not support debugging C++, script, or other languages. Debugging of F# applications is supported on an experimental basis. (editorial) [This is really poor support in my opinion. .NET is less in need of this assistance than unmanaged C++. I am getting a little tired of plain old C++ and its second-class status in the MS tools world. Yes, I realize it is probably WAAY easier to implement this with .NET and MS are pushing .NET as the future, and yes, I know that C++ is an "old" language, but that does not diminish the fact that there are lots of C++ apps out there and there will continue to be more apps built with C++. I sincerely hope MS has not dropped C++ as a supported developer tool/language - that would be a shame.] Does anyone know if there are plans for it to support C++?

    Read the article

  • Version Control and Code Formatting

    - by Martin Giffy D'Souza
    Hi, I'm currently part of the team implementing a new version control system (Subversion) within my organization. There's been a bit of a debate on how to handle code formatting and I'd like to get other people's opinions and experiences on this topic. We currently have ~10 developers each using different tools (due to licensing and preference). Some of these tools have automatic code formatters and others don't. If we allow "blind" check-ins the code will look drastically different each time someone does a check-in. This will make things such as diffs and merges complicated. I've talked to several people and they've mentioned the following solutions: Use the same developer program with the same code formatter (not really an option due to licensing) Have a hook (either client or server side) which will automatically format the code before going into the repository Manually format the code. Regarding the 3rd point, the concept is to never auto-format the code and have some standards. Right now that seems to be what we're leaning towards. I'm a bit hesitant on that approach as it could lead to developers spending a lot of time manually formatting code. If anyone can provide some of their thoughts and experience on this, that would be great. Thank you, Martin

    Read the article

  • ClassCastException while using service

    - by Sebi
    I defined a local Service: public class ComService extends Service implements IComService { private IBinder binder = new ComServiceBinder(); public class ComServiceBinder extends Binder implements IComService.IServiceBinder { public IComService getService() { return ComService.this; } } public void test(String msg) { System.out.println(msg); } @Override public IBinder onBind(Intent intent) { return binder; } } The corresponding interface: public interface IComService { public void test(String msg); public interface IServiceBinder { IComService getService(); } } Then I try to bind the service from an activity in another application, where the same interface is available: bindService(new Intent("ch.ifi.csg.games4blue.gamebase.api.ComService"), conn, Context.BIND_AUTO_CREATE); and private ServiceConnection conn = new ServiceConnection() { @Override public void onServiceConnected(ComponentName name, IBinder service) { Log.i("INFO", "Service bound " + name); comService = ((IComService.IServiceBinder)service).getService(); serviceHandler.sendEmptyMessage(0); } @Override public void onServiceDisconnected(ComponentName arg0) { Log.i("INFO", "Service Unbound "); } }; but the line comService = ((IComService.IServiceBinder)service).getService(); always throws java.lang.ClassCastException: android.os.BinderProxy (05-02 22:12:55.922: ERROR/AndroidRuntime(622)). I can't explain why; I followed the sample at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/LocalServiceBinding.html Any hints would be nice!
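
    Since the service lives in a different application, it runs in a different process, so the IBinder handed to onServiceConnected is an android.os.BinderProxy rather than the local ComServiceBinder, and the cast fails. The usual route for cross-process binding is AIDL. Below is a minimal sketch of fragments to drop into the two existing classes, assuming IComService is redefined as an .aidl file shared by both applications; all names are illustrative and follow the question.

        // IComService.aidl (same package in both apps):
        //   package ch.ifi.csg.games4blue.gamebase.api;
        //   interface IComService {
        //       void test(String msg);
        //   }

        // In the service, return the generated Stub instead of a local Binder subclass:
        private final IComService.Stub binder = new IComService.Stub() {
            @Override
            public void test(String msg) {
                System.out.println(msg);
            }
        };

        @Override
        public IBinder onBind(Intent intent) {
            return binder;
        }

        // In the client, unwrap the proxy with asInterface() instead of casting:
        private final ServiceConnection conn = new ServiceConnection() {
            @Override
            public void onServiceConnected(ComponentName name, IBinder service) {
                comService = IComService.Stub.asInterface(service);
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                comService = null;
            }
        };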

    Read the article

  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. Example: I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base, something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is, how can I merge his changes into my workspace without being required to commit all my changes first, since I need his changes to test my own code? A more day-to-day problem we have with the exact same workflow is where we have a couple of configuration files which are in the repository. Each developer makes a couple of small environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files hinder us from merging anything into our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be, for reasons I won't go into here.

    Read the article

  • Tips on managing dependencies for a release?

    - by Andrew Murray
    Our system comprises many .NET websites, class libraries, and an MSSQL database. We use SVN for source control and TeamCity to automatically build to a Test server. Our team is normally working on 4 or 5 projects at a time. We try to lump many changes into a largish rollout every 2-4 weeks. My problem is with keeping track of all the dependencies for a rollout. Example: Website A cannot go live until we've rolled out Branch X of Class library B, built in turn against the Trunk of Class library C, which needs Config Updates Y and Z and Database Update D, which needs Migration Script E... It gets even more complex - like making sure each developer's project is actually compatible with the others and is building against the same versions. Yes, this is a management issue as much as a technical issue. Currently our non-optimal solution is: a whiteboard listing features that haven't gone live yet relying on our memory and intuition when planning the rollout, until we're pretty sure we've thought of everything... a dry-run on our Staging environment. It's a good indication but we're often not sure if Staging is 100% in sync with Live - part of the problem I'm hoping to solve. some amount of winging it on rollout day. So far so good, minus a few close calls. But as our system grows, I'd like a more scientific release management system allowing for more flexibility, like being able to roll out a single change or bugfix on its own, safe in the knowledge that it won't break anything else. I'm guessing the best solution involves some sort of version numbering system, and perhaps using a project management tool. We're a start-up, so we're not too hot on religiously sticking to rigid processes, but we're happy to start, providing it doesn't add more overhead than it's worth. I'd love to hear advice from other teams who have solved this problem.

    Read the article

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code: use WWW::Facebook::API; my $facebook = WWW::Facebook::API->new( desktop => 0, api_key => $fb_api_key, secret => $fb_secret, session_key => $query->cookie($fb_api_key.'_session_key'), session_expires => $query->cookie($fb_api_key.'_expires'), session_uid => $query->cookie($fb_api_key.'_user') ); my $response = $facebook->stream->publish( message => qq|Test status message|, ); However, when we try to update the code above so we can publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, we have had no luck; we have tried about 100 different ways without any success. According to the CPAN documentation all we should have to do is update our code to something like the following and pass the attachments & action links appropriately, which doesn't seem to work: my $response = $facebook->stream->publish( message => qq|Test status message|, attachment => $json, action_links => [@links], ); For example, we are passing the above arguments as follows: $json = qq|{ 'name': 'i\'m bursting with joy', 'href': ' http://bit.ly/187gO1', 'caption': '{*actor*} rated the lolcat 5 stars', 'description': 'a funny looking cat', 'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'}, 'ratings': '5 stars' }, 'media': [{ 'type': 'image', 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg', 'href': 'http://bit.ly/187gO1'}] }|; @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"]; Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

    Read the article

  • The project is not configured for Facelets yet in RAD 8.0

    - by Jyoti
    I am trying to create JSF 2 pages. When I create pages using a Facelets template, I get a message at the top saying "The project is not configured for Facelets yet. You need to add a Facelets runtime to the project's classpath". I created a file called Test1.xhtml: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html"> <h:head> <title>Test1</title> <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" /> <meta name="GENERATOR" content="Rational® Application Developer for WebSphere® Software" /> </h:head> <h:body> Test </h:body> </html> When I run this I see the same content of the file in the browser, instead of Test. Also, the page code is not created for it.

    Read the article

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help for the first time here. If possible, please help me. I am developing a VB.NET app in which I read "real time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some LOCAL DDE server like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on... Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. Values of DDE mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing. THE PROBLEM is here: "FIXED values" from Excel cells come through to my app quite OK, but all DDE mapped cell values come through as "-2146826246" (when the local DDE server data source is ON) or "-2146826265" (OFF). Although, if I use C#.NET, it's all OK, but not with VB.NET. I want to display an Excel range (A1 to J50) in a VB.NET ListView, with values changing every 200ms (5 times every second). ================ Important ====================================================== Is it possible to BIND "listview items/columns values" with "excel cells" or some local memory variables? Currently, I am reading Excel "cell by cell" and trying to put the values in a .NET ListView, but CPU usage is very high and it's a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me, if someone has the solution...

    Read the article

  • How to add share menu item to Gallery by code

    - by Anthony
    I know how to implement this via AndroidManifest.xml; see also: Google Android Developer Group related issue. But my question is how to add a share menu item to the Gallery in Java code, not via the manifest. My code is as below: public class MyActivity extends Activity { private static final String TAG = "MyActivity"; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); IntentFilter intentFilter = new IntentFilter(); intentFilter.addAction(Intent.ACTION_SEND); intentFilter.addCategory(Intent.CATEGORY_DEFAULT); try { intentFilter.addDataType("image/*"); } catch (MalformedMimeTypeException e) { Log.e(TAG, e.toString()); } Intent x = registerReceiver(new BroadcastReceiver() { public void onReceive(Context context, Intent intent) { Log.d(TAG, "Received intent "+intent); intent.setComponent(new ComponentName(context, Uploader.class)); startActivity(intent); } }, intentFilter); if (x==null) Log.i(TAG, "failed to regist a receiver"); else Log.i(TAG, "registed a receiver successfully"); // ... But registerReceiver always returns null, and no menu item is added to the Gallery's Share. Thank you. Anthony Xu
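
    registerReceiver returning null is expected here: it only returns an Intent when a matching sticky broadcast exists, and ACTION_SEND is not delivered as a broadcast at all. The Gallery builds its Share menu from activities whose intent filter declares ACTION_SEND in the manifest, and activity intent filters cannot be registered from Java code at runtime, so a BroadcastReceiver will never get an entry into that menu. Below is a small sketch of what can be done in code, namely listing which activities the Share menu would currently offer, to verify whether your own activity is registered; TAG follows the question and the rest is illustrative.

        import java.util.List;

        import android.content.Intent;
        import android.content.pm.ResolveInfo;
        import android.util.Log;

        // Inside an Activity (or any other Context):
        Intent share = new Intent(Intent.ACTION_SEND);
        share.setType("image/*");
        List<ResolveInfo> targets = getPackageManager().queryIntentActivities(share, 0);
        for (ResolveInfo info : targets) {
            Log.d(TAG, "Share target: " + info.activityInfo.packageName
                    + "/" + info.activityInfo.name);
        }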

    Read the article

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web application to production. Currently, that group has access to our current code repository in VSS and uses VSS to deploy code that has been checked into VSS. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would Team Explorer client access suffice? I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to Production by copying aspx and dll files from the QA environment to production, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS. What confuses me is whether bin (binary) files that are local to the project go into TFS. If so, doesn't this create problems for other developers, in that only one developer (the one with the binary checked out) can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds really ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number? Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments in particular using TFS?

    Read the article

  • Should we migrate from svn to Team Foundation Server 2010?

    - by Florian
    We are a team of 6 developers and currently use Visual Studio 2008 Professional with SVN and VisualSVN. As soon as vs2010 is released we will upgrade from vs2008 pro to vs2010 premium. However, if Team Foundation Server has proper source control included in vs2010 premium, then it makes sense to use it. We like SVN, but like tight integration of tools even better. On the internet, information on SVN versus TFS 2010 seems to be scarce. Hence my question here. EDIT: This video looks very compelling. Is this marketing talk or real? Thank you all for your replies! I absolutely appreciate this. A little more background info. This is our current stack: vs2008 pro, VisualSVN, SVN, JetBrains TeamCity. My main problem is that we use a lot of tools from different vendors which more or less integrate. Sometimes more, mostly less. At least it takes a lot of time to set it up correctly. We currently do not use branches, but we want to. Therefore we have to set up SVN from scratch (we looked into it carefully). So let me rephrase my question: Should we set up SVN or start using TFS?

    Read the article

  • Code Own Socket Server or Use Red5/ElectroServer on Amazon EC2?

    - by Travis
    I've been thinking for a long time about working on a multiplayer game in Flash. I need updates frequently enough that ajax requests won't work, so I need to use a socket server. The system will eventually have enough objects/players that I would consider it an MMO. I would like to set up a scalable system on Amazon's EC2. (Which probably affects my choice of server.) This architecture would hopefully allow the game to grow without many changes over time. (Using a domain decomposition technique or something similar.) Here's my internal debate: Should I a. Code my own socket server in C++ or Java? b. Use the free and open source Red5 socket server for Flash? or c. Pay the licensing fees and go for Electroserver? I consider myself a decent developer, but am at an impasse as to what road to go down. I'm not sure if I could develop, or would need, the features of one of the prepackaged socket servers. I'm also not sure if the prepackaged servers would work well in an Amazon EC2 environment and take full advantage of its features. Any help or guidance would be greatly appreciated.

    Read the article

  • Cannot read value from SYS_CONTEXT

    - by AppleGrew
    I have a PL/SQL procedure which sets a variable in the user session, like the following: Dbms_Session.Set_Context( NAMESPACE =>'MY_CTX', ATTRIBUTE => 'FLAG_NAME', Value => 'some value'); Just after this (in the same procedure), I try to read the value of this flag, using: SYS_CONTEXT('MY_CTX', 'FLAG_NAME'); The above returns nothing. How did the DB lose this value? The weirder part is that if I invoke this proc directly from Oracle SQL Developer then it works. It doesn't work when I invoke this proc from my web application via a callable statement. --EDIT-- Added an example of how we are invoking the proc from our Java code. String statement = "Begin package_name.proc_name( flag_val => :1); END;"; OracleCallableStatement st = <some object by some framework> .createCallableStatement(statement); st.setString(1, 'flag value'); st.execute(); st.close();
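
    One thing worth ruling out is session affinity: values set with DBMS_SESSION.SET_CONTEXT live in the database session, so if the web application's framework hands the read to a different pooled connection (or the context was created against a different schema than the one the application connects as), SYS_CONTEXT can come back empty even though the same call works in SQL Developer. Below is a minimal sketch that sets and reads the flag over one and the same JDBC Connection to isolate the problem; package_name.proc_name, MY_CTX and FLAG_NAME are taken from the question, while dataSource and the rest are illustrative.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.sql.DataSource;

        public class ContextCheck {
            // Run the setter and the read in the same session to rule out connection pooling.
            public static void checkFlag(DataSource dataSource) throws Exception {
                try (Connection conn = dataSource.getConnection()) {
                    try (CallableStatement cs =
                            conn.prepareCall("BEGIN package_name.proc_name(flag_val => ?); END;")) {
                        cs.setString(1, "flag value");
                        cs.execute();
                    }
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT SYS_CONTEXT('MY_CTX', 'FLAG_NAME') FROM dual");
                         ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            System.out.println("FLAG_NAME in this session: " + rs.getString(1));
                        }
                    }
                }
            }
        }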

    Read the article

  • What is best practice (and implications) for packaging projects into JAR's?

    - by user245510
    What is considered best practice for deciding how to define the set of JARs for a project (for example a Swing GUI)? There are many possible groupings: JAR per layer (presentation, business, data) JAR per (significant?) GUI panel. For a significant system, this results in a large number of JARs, but the JARs are (should be) more re-usable - fine-grained granularity JAR per "project" (in the sense of an IDE project); "common.jar", "resources.jar", "gui.jar", etc. I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation, and the holy grail of re-use across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle to load classes from dozens, maybe hundreds of small JARs. Each JAR would contain: the GUI panel code and necessary resources (i.e. not centralised) so each panel can stand alone. Does anyone have wisdom to share?

    Read the article

  • Query notation for the sitecore 'source' field in template builder

    - by M.R.
    I am trying to set the source field of a template using the query notation (or XPath - whichever works), but neither of them seems to be working. My content tree is a multisite content tree: France --Page 1 ----Page1A -------Page1AA --Page 2 --Page 3 --METADATA ----Regions US --Page 1 ----Page1A -------Page1AA --Page 2 --Page 3 --METADATA ----Regions Each site has its own METADATA folder, and when adding a page inside each of the main country nodes, I want the values to reflect whatever is in the METADATA of that site. I have two different fields for now - a droplink and a treelistex field. So I thought I could just get the parent item that is a country site, and get the metadata folder for that. When I put the following query in both the fields, I get different results: query:./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/* For the droplink field, I get only the first Region (one item) For the treelistex field, I get the entire content tree I then tried to modify the query a little bit and took the 'query' notation out ./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/* If I go to the developer center/xpath builder, and set the context node to any item underneath the main country site, it returns exactly what I need, but when I put this in the source, I get the entire content tree in both cases. Help!

    Read the article

  • how to use git rebase to clean up a convoluted history

    - by lsiden
    After working for several weeks with a half dozen different branches and merges, on both my laptop at work and my desktop at home, my history has gotten a bit convoluted. For example, I just did a fetch, then merged master with origin/master. Now, when I do git show-branch, the output looks like this: ! [login] Changed domain name. ! [master] Merge remote branch 'origin/master' ! [migrate-1.9] Migrating to 1.9.1 on Heroku ! [rebase-master] Merge remote branch 'origin/master' ---- - - [master] Merge remote branch 'origin/master' + + [master^2] A bit of re-arranging and cleanup. - - [master^2^] Merge branch 'rpx-login' + + [master^2^^2] Commented out some debug logging. + + [master^2^^2^] Monkey-patched Rack::Request#ip + + [master^2^^2~2] dump each request to log .... I would like to clean this up with a git rebase. I created a new branch, rebase-master, for this purpose, and on this branch tried git rebase <common-ancestor>. However, I have to resolve many conflicts, and the end result on branch rebase-master no longer matches the corresponding version on master, which has already been tested and works! I thought I saw a solution to this somewhere but can't find it anymore. Does anyone know how to do this? Or will these convoluted ref names go away when I start deleting unneeded branches that I have already merged? I am the sole developer on this project, so there is no one else who will be affected.

    Read the article

  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that appears sometimes when you submit a form on Firefox, that says "Remember the password for yoursite.com? Yes / Not now / Never") This is super frustrating because this feature of Firefox (and most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case, where it's not offering my users that option after they use my login form, and it's making me nuts. :-) (I checked my Firefox settings-- I have NOT told the browser "never" for this site. It should be prompting.) My question: what exactly are the heuristics that Firefox (or any other modern browser) uses to know when it should prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I don't know where to look or else I'd try to dig it out myself). You'd think there would be a blog post or some other similar developer note from the Mozilla developers about this but I can't find that either. (* Note that if your answer to me has anything to do with cookies, encryption or anything else that is about how I'm storing the user's passwords in the database, you've probably misread my question. :-)

    Read the article

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's Fisheye, Jira, etc., Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release notes generator). Most of the time, our team just writes little plugins for larger apps (ex: customize workflows in Anthill), long-term utility scripts (package up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable. My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point--we mostly do "glue", so why not use "glue" languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for these projects is BeanShell and Groovy.) I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set: what problems do we solve with scripts? Would we benefit from a library of common functions by our team, or are most of our projects more isolated? What is it reasonable to expect my co-workers to learn? What languages give us the most ease of development and ease of modification? Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?

    Read the article

  • JW Player can not play m3u8 stream?

    - by why
    Check this example: http://developer.longtailvideo.com/player/branches/adaptive/test/provider.html . I tried the example myself; here is my code: <html> <head> <script type="text/javascript" src="jwplayer.js"></script> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <title>Provider tests</title> <style> body { padding: 50px; font: 13px/20px Arial; background: #EEE; } form { margin-top: 20px; } #player { -webkit-box-shadow: 0 0 5px #999; background: #000; } ul { margin-top: 40px; padding: 0 0 0 20px; list-style-type: square; } </style> </head> <body> Test M3U8 <div id="player">You need Flash to play these tests</div> <script type="text/javascript"> jwplayer("player").setup({ file: '../m3u8/index.m3u8', flashplayer: 'player.swf', provider:'adaptiveProvider.swf', height: 360, width: 640 }); function loadStream(url) { jwplayer("player").load({file: url,provider: 'adaptiveProvider.swf'}); jwplayer("player").play(); return false; } $(document).ready(function() { loadStream('http://localhost/m3u8/index.m3u8'); }); </script> <ul id="streamlist"></ul> <div id="panel"></div> </body> </html> But JW Player cannot play it. BTW: my VLC can play http://localhost/m3u8/index.m3u8 well.

    Read the article

  • Google Web Toolkit Asynchronous Call from a Service Implementation

    - by Thor Thurn
    I'm writing a simple Google Web Toolkit service which acts as a proxy; it basically exists to allow the client to make a POST to a different server. The client essentially uses this service to request an HTTP call. The service has only one asynchronous method call, called ajax(), which should just forward the server response. My code for implementing the call looks like this: class ProxyServiceImpl extends RemoteServiceServlet implements ProxyService { @Override public Response ajax(String data) { RequestBuilder rb = /*make a request builder*/ RequestCallback rc = new RequestCallback() { @Override public void onResponseReceived(Response response) { /* Forward this response back to the client as the return value of the ajax method... somehow... */ } }; rb.sendRequest(data, rc); return /* The response above... except I can't */; } } You can see the basic form of my problem, of course. The ajax() method is used asynchronously, but GWT decides to be smart and hide that from the dumb old developer, so they can just write normal Java code without callbacks. GWT services basically just do magic instead of accepting a callback parameter. The trouble arises, then, because GWT is hiding the callback object from me. I'm trying to make my own asynchronous call from the service implementation, but I can't, because GWT services assume that you behave synchronously in service implementations. How can I work around this and make an asynchronous call from my service method implementation?
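
    The catch is that ProxyServiceImpl runs on the server, while RequestBuilder, RequestCallback and Response are client-side (browser) classes, so there is no GWT callback to hook into there. The GWT-RPC call is already asynchronous from the browser's point of view, so the servlet can simply make a blocking HTTP call and return the result. Below is a minimal sketch using java.net.HttpURLConnection that returns the body as a plain String instead of the client-only Response type; the target URL and the error handling are illustrative, and the ProxyService and ProxyServiceAsync interfaces are assumed to declare String as the return type accordingly.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        import com.google.gwt.user.server.rpc.RemoteServiceServlet;

        public class ProxyServiceImpl extends RemoteServiceServlet implements ProxyService {

            // Illustrative target; substitute the server being proxied to.
            private static final String TARGET_URL = "http://other-server.example.com/endpoint";

            @Override
            public String ajax(String data) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(TARGET_URL).openConnection();
                    conn.setRequestMethod("POST");
                    conn.setDoOutput(true);
                    // Forward the client's payload to the remote server.
                    try (OutputStream out = conn.getOutputStream()) {
                        out.write(data.getBytes(StandardCharsets.UTF_8));
                    }
                    // Read the remote response and hand it back as the RPC return value.
                    StringBuilder body = new StringBuilder();
                    try (BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            body.append(line).append('\n');
                        }
                    }
                    return body.toString();
                } catch (Exception e) {
                    throw new RuntimeException("Proxy call failed", e);
                }
            }
        }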

    Read the article
