Search Results

Search found 38458 results on 1539 pages for 'remote access'.


  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know! The caveat, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work?

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like GitHub, for backup and cloning of all my work. Footnote: this question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
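
    A configured push refspec can use HEAD as its source, which always refers to the currently checked-out branch. A minimal sketch, assuming a reasonably recent git that resolves HEAD in a configured refspec (the leading + permits the forced, history-rewriting push):

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            # push whatever branch is currently checked out onto Heroku's master
            push = +HEAD:refs/heads/master

    The wildcard variant quoted in the question would not work as intended: git rejects a refspec whose source contains a glob but whose destination is a single ref, so "+refs/heads/*:refs/heads/master" fails rather than mapping every branch onto master.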


  • Grails 1.2.0 not finding plugins in the default repository.

    - by Padraic
    I do not know what changed in my environment, but all of a sudden I cannot pull any plugins from the default repository. I went through the _*.groovy scripts and nothing has changed in my Grails home directory, and it appears that the default repository URL is set correctly (DEFAULT_PLUGIN_DIST = "http://plugins.grails.org"). I am assuming it is an environment setting that changed on me, because if I switch to an old version of Grails that I have installed, 1.1.1 for example, list-plugins returns a full list of plugins. When I run grails list-plugins in my current 1.2.0 environment I get the following output:

        Welcome to Grails 1.2.0 - http://grails.org/
        Licensed under Apache Standard License 2.0
        Grails home is set to: /opt/grails-1.2.0

        Base Directory: /Users/padraic/Projects/TestApplicationMachine
        Resolving dependencies...
        Dependencies resolved in 1633ms.
        Running script /opt/grails-1.2.0/scripts/ListPlugins_.groovy
        Environment set to development
        Reading remote plugin list ...
        Plug-ins available in the core repository are listed below:
        hibernate <1.3.0.RC2 -- Hibernate for Grails
        tomcat <1.3.0.RC2 -- Apache Tomcat plugin for Grails
        webflow <1.3.0.RC2 -- Spring Web Flow Plugin
        Reading remote plugin list ...
        Plug-ins available in the default repository are listed below:
        spock <0.4-groovy-1.7-SNAPSHOT -- Spock Integration - spockframework.org
        Plug-ins you currently have installed are listed below:
        cloud-foundry 0.2 -- Cloud Foundry Plugin for Grails
        hibernate 1.2.0 -- Hibernate for Grails
        tomcat 1.2.0 -- Apache Tomcat plugin for Grails

    I find it very strange that it only finds the spock plugin. It makes me think that either (a) it is going to the wrong repository or (b) my version setting is incorrect. Any ideas? Thanks, Padraic


  • Why ClassCastException on JMS ConnectionFactory lookup in JNDI?

    - by Derek Mahar
    What might be the cause of the following ClassCastException in a standalone JMS client application when it attempts to retrieve a connection factory from the JNDI provider?

        Exception in thread "main" java.lang.ClassCastException:
            javax.naming.Reference cannot be cast to javax.jms.ConnectionFactory

    Here is an abbreviated version of the JMS client that includes only its start() and stop() methods. The exception occurs on the first line of method start(), which attempts to retrieve the connection factory from the JNDI provider, a remote LDAP server. The JMS connection factory and destination objects are on a remote JMS server.

        class JmsClient {
            private ConnectionFactory connectionFactory;
            private Connection connection;
            private Session session;
            private MessageConsumer consumer;
            private Topic topic;

            public void stop() throws JMSException {
                consumer.close();
                session.close();
                connection.close();
            }

            public void start(Context context, String connectionFactoryName, String topicName)
                    throws NamingException, JMSException {
                // ClassCastException occurs when retrieving connection factory.
                connectionFactory = (ConnectionFactory) context.lookup(connectionFactoryName);
                connection = connectionFactory.createConnection("username", "password");
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                topic = (Topic) context.lookup(topicName);
                consumer = session.createConsumer(topic);
                connection.start();
            }

            private static Context getInitialContext() throws NamingException, IOException {
                String filename = "context.properties";
                Properties props = new Properties();
                props.load(new FileInputStream(filename));
                return new InitialContext(props);
            }
        }
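
    When a JNDI lookup hands back a raw javax.naming.Reference instead of the bound object, it usually means the client could not instantiate the object locally - typically because the JMS provider's client JARs, which contain the object factory class named inside the Reference, are missing from the standalone client's classpath. A small diagnostic sketch along those lines, reusing the context and names from the code above:

        Object bound = context.lookup(connectionFactoryName);
        if (bound instanceof javax.naming.Reference) {
            javax.naming.Reference ref = (javax.naming.Reference) bound;
            // This factory class must be on the client classpath for JNDI to
            // turn the Reference into a real ConnectionFactory.
            System.err.println("Lookup returned a Reference; object factory class: "
                    + ref.getFactoryClassName());
        } else {
            connectionFactory = (ConnectionFactory) bound;
        }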


  • video calling (center)

    - by rrejc
    We are starting to develop a new application and I'm searching for information/tips/guides on application architecture. The application should:

    1. read the data from an external (USB) device
    2. send the data to the remote server (through the internet)
    3. receive the data from the remote server
    4. perform a video call to the calling (support) center
    5. receive a video call from the calling (support) center
    6. support touch screens
    7. (in addition) make some of the data also visible through a web page

    So I was thinking about the following. On the server side:

    - use a database (probably MS SQL)
    - use an ORM (NHibernate) to map the data from the DB to the domain objects
    - create a layer with business logic in C#
    - create web (WCF) services (for the client application)
    - create an ASP.NET MVC application (for item 7) to enable data view through the browser

    On the client side I would use a WPF 4 application which will communicate with the external device and the WCF services on the server. So far so good. Now the problem begins: I have no idea how to create the video call (outgoing or incoming) part of the application. I believe there is no problem communicating with the microphone, speakers, and camera from WPF/C#. But how to communicate with the call center? What protocol and encoding should be used? I think that I will need to create some kind of server which will:

    - have a list of operators in the calling center and track which operator is occupied and which is free
    - have a list of connected end users
    - receive incoming calls from end users and delegate each call to a free operator
    - delegate calls from the calling center to the end user

    Any info, link, anything on where to start would be much appreciated. Many thanks!


  • Windows Azure ASP.NET MVC Role behaves strangely when redirecting from HTTP to HTTPS

    - by Rinat Abdullin
    Subj. I've got an ASP.NET 2 MVC worker role application that does not differ much from the default template. When attempting to redirect from HTTP to HTTPS (this happens when we access a controller secured by the usual RequireSSL attribute implementation), we get a blank page with a "Bad Request" message. IntelliTrace shows this:

        Thrown: "The file '/Views/Home/LogOnUserControl.aspx' does not exist." (System.Web.HttpException)

    The call stack is really short:

        [External Code]
        App_Web_vfahw7gz.dll!ASP.views_shared_site_master.__Render__control1(System.Web.UI.HtmlTextWriter __w = {unknown}, System.Web.UI.Control parameterContainer = {unknown})
        [External Code]
        App_Web_bsbqxr44.dll!ASP.views_home_index_aspx.ProcessRequest(System.Web.HttpContext context = {unknown})
        [External Code]

    The user control reference is the usual one in /Views/Shared/Site.Master:

        <div id="logindisplay">
            <% Html.RenderPartial("LogOnUserControl"); %>
        </div>

    And the partial view LogOnUserControl.ashx is located in Views/Shared (and it is ASHX, not ASPX). The problem shows up when we try to access site pages that require auth and redirect. These pages are secured by the RequireSSL attribute (Redirect == true):

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
        public sealed class RequireSslAttribute : FilterAttribute, IAuthorizationFilter
        {
            public bool Redirect { get; set; }

            // Methods
            public void OnAuthorization(AuthorizationContext filterContext)
            {
                // this gets messy when we are running custom ports
                // within the local dev fabric,
                // hence we disable the code in debug builds
        #if !DEBUG
                if (filterContext == null)
                {
                    throw new ArgumentNullException("filterContext");
                }
                if (filterContext.HttpContext.Request.IsSecureConnection) return;
                var canRedirect = string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
                if (canRedirect && Redirect)
                {
                    var builder = new UriBuilder
                    {
                        Scheme = "https",
                        Host = filterContext.HttpContext.Request.Url.Host,
                        Path = filterContext.HttpContext.Request.RawUrl
                    };
                    filterContext.Result = new RedirectResult(builder.ToString());
                }
                else
                {
                    throw new HttpException(0x193, "Access forbidden. The requested resource requires an SSL connection.");
                }
        #endif
            }
        }

    Obviously we compile in RELEASE for this case. Does anybody have any idea what could cause this strange exception and how to get rid of it?


  • Is it possible to read data that has been separately copied to the Android SD card without having root access?

    - by icecream
    I am developing an application that needs to access data on the SD card. When I run on my development device (an ODROID with Android 2.1) I have root access and can construct the path using:

        File sdcard = Environment.getExternalStorageDirectory();
        String path = sdcard.getAbsolutePath() + File.separator + "mydata";
        File data = new File(path);
        File[] files = data.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String filename) {
                return filename.toLowerCase().endsWith(".xyz");
            }
        });

    However, when I install this on a phone (2.1) where I do not have root access I get files == null. I assume this is because I do not have the right permissions to read the data from the SD card. I also get files == null when just trying to list files on /sdcard, so the same applies without my constructed path. Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the SD card, so this is a real use-case. It is too much data to put in res/raw (I have tried, it did not work). I have also tried adding:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    to the manifest, even though I only want to read the SD card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.
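
    Before suspecting permissions, note that listFiles() also returns null when the directory does not exist or when the media is unavailable - for example while the phone is mounted on a PC as USB mass storage. A quick diagnostic sketch, reusing the data File from the code above (the log tag is just an example):

        String state = Environment.getExternalStorageState();
        if (!Environment.MEDIA_MOUNTED.equals(state)
                && !Environment.MEDIA_MOUNTED_READ_ONLY.equals(state)) {
            Log.w("SdCardDemo", "External storage unavailable, state=" + state);
        } else if (!data.isDirectory()) {
            Log.w("SdCardDemo", "Not a readable directory: " + data.getAbsolutePath());
        }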


  • Building an extension framework for a Rails app

    - by obvio171
    I'm starting research on what I'd need in order to build a user-level plugin system (like WordPress plugins) for a Rails app, so I'd appreciate some general pointers/advice. By user-level plugin I mean a package a user can extract into a folder and have it show up on an admin interface, allowing them to add some extra configuration and then activate it. What is the best way to go about doing this? Is there any other open-source project that does this already? What does Rails itself already offer for programmer-level plugins that could be leveraged? Any Rails plugins that could help me with this? A plugin would have to be able to:

    - run its own migrations (with this? it's undocumented)
    - have access to my models (plugins already do)
    - have entry points for adding content to views (can be done with content_for and yield)
    - replace entire views or partials (how?)
    - provide its own admin and user-facing views (how?)
    - create its own routes (or maybe just announce its presence and let me create the routes for it, to avoid plugins stepping on each other's toes)

    Anything else I'm missing? Also, is there a way to limit which tables/actions the plugin has access to concerning migrations and models, and also limit their access to routes (maybe letting them include, but not remove, routes)? P.S.: I'll try to keep this updated, compiling stuff I figure out and relevant answers, so as to have a sort of guide for others.


  • Design considerations for temporarily transforming a player into an animal in a role playing game

    - by mikedev
    I am working on a role playing game for fun and to practice design patterns. I would like players to be able to transform themselves into different animals. For example, a Druid might be able to shape-shift into a cheetah. Right now I'm planning on using the decorator pattern to do this, but my question is: how do I make it so that when a druid is in cheetah form, they can only access skills for the cheetah? In other words, they should not be able to access their normal Druid skills. Using the decorator pattern, it appears that even in cheetah form my druid will be able to access their normal druid skills.

        class Druid : Character
        {
            // many cool druid skills and spells
            void LightHeal(Character target) { }
        }

        abstract class CharacterDecorator : Character
        {
            Character DecoratedCharacter;
        }

        class CheetahForm : CharacterDecorator
        {
            Character DecoratedCharacter;

            public CheetahForm(Character decoratedCharacter)
            {
                DecoratedCharacter = decoratedCharacter;
            }

            // many cool cheetah related skills
            void CheetahRun()
            {
                // let player move very fast
            }
        }

    Now, using the classes:

        Druid myDruid = new Druid();
        myDruid.LightHeal(myDruid); // casting light heal here is fine
        myDruid = new CheetahForm(myDruid);
        myDruid.LightHeal(myDruid); // casting here should not be allowed

    Hmmm... now that I think about it, will myDruid be unable to use the Druid class spells/skills unless the class is down-casted? But even if that's the case, is there a better way to ensure that myDruid is at this point locked out of all Druid-related spells/skills until it is cast back to a Druid (since currently it's in CheetahForm)?


  • How to solve an RPC fault when using Flex Builder 3 and BlazeDS?

    - by Teerasej
    Hi, everyone. Thank you for your interest in my question; I think you can help me out of this little problem. I am using Flex Builder 3, BlazeDS, and Java with the Spring and Hibernate frameworks. I am using a remote object to load a string from Spring's configuration files. But in testing, I got this fault event:

        RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"

    I have checked the configuration in remoting-config.xml and services-config.xml, but it looks good. Some people have talked about this problem around the internet, and I think you can help me and them. I am using this environment:

    - Flex Builder 3
    - BlazeDS 3.2.0
    - JBoss server

    Full stack trace:

        [RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:220]
            at mx.rpc::Responder/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\Responder.as:53]
            at mx.rpc::AsyncRequest/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AsyncRequest.as:103]
            at NetConnectionMessageResponder/statusHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:569]
            at mx.messaging::MessageResponder/status()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\MessageResponder.as:222]


  • Problem with JMX query of Coherence node MBeans visible in JConsole

    - by Quinn Taylor
    I'm using JMX to build a custom tool for monitoring remote Coherence clusters at work. I'm able to connect just fine and query MBeans directly, and I've acquired nearly all the information I need. However, I've run into a snag when trying to query MBeans for specific caches within a cluster, which is where I can find stats about the total number of gets/puts, average time for each, etc. The MBeans I'm trying to access programmatically are visible when I connect to the remote process using JConsole, and have names like this:

        Coherence:type=Cache,service=SequenceQueue,name=SEQ%GENERATOR,nodeId=1,tier=back

    It would make the tool more flexible if I could dynamically grab all type=Cache MBeans for a particular node ID without specifying all the caches. I'm trying to query them like this:

        QueryExp specifiedNodeId = Query.eq(Query.attr("nodeId"), Query.value(nodeId));
        QueryExp typeIsCache = Query.eq(Query.attr("type"), Query.value("Cache"));
        QueryExp cacheNodes = Query.and(specifiedNodeId, typeIsCache);
        ObjectName coherence = new ObjectName("Coherence:*");
        Set<ObjectName> cacheMBeans = mBeanServer.queryMBeans(coherence, cacheNodes);

    However, regardless of whether I use queryMBeans() or queryNames(), the query returns a Set containing:

    - 0 objects if I pass the arguments shown above
    - 0 objects if I pass null for the first argument
    - all MBeans in the Coherence:* domain (112) if I pass null for the second argument
    - every single MBean (128) if I pass null for both arguments

    The first two results are the unexpected ones, and suggest a problem in the QueryExp I'm passing, but I can't figure out what the problem is. I even tried just passing typeIsCache or specifiedNodeId for the second parameter (with either coherence or null as the first parameter) and I always get 0 results. I'm pretty green with JMX - any insight on what the problem is? (FYI, the monitoring tool will be run on Java 5, so things like JMX 2.0 won't help me at this point.)
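
    One explanation worth checking: Query.attr() is evaluated against the MBean's attributes, whereas type and nodeId here are key properties of the ObjectName, so the QueryExp never matches anything - which would produce exactly the zero-result behavior described. If that is the cause, an ObjectName pattern selects the beans without any QueryExp at all; a minimal sketch:

        // type and nodeId are ObjectName key properties, so match them in the
        // name pattern; the trailing ,* lets the other key properties vary.
        ObjectName pattern = new ObjectName("Coherence:type=Cache,nodeId=" + nodeId + ",*");
        Set<ObjectName> cacheNames = mBeanServer.queryNames(pattern, null);

    (As an aside, queryMBeans() returns a Set of ObjectInstance rather than ObjectName; queryNames() is the one that returns Set<ObjectName>.)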


  • Yet another crossdomain.xml question, or: "How to interpret documentation correctly"

    - by cboese
    Hi! I have read a lot about the new policy behaviour of the Flash player and also know about the master policy file. Now imagine the following situation: there are two servers with services (HTTP) running at custom ports:

    - servera.com:2222/websiteA
    - serverb.com:3333/websiteB

    Now I open a SWF from server a (e.g. servera.com:2222/websiteA/A.swf) that wants to access the service of server b. Of course I need a crossdomain.xml in the right place, and there are multiple possible variations. I don't want to use a master policy file, as I might not have control over the root of both servers. One solution I found works with the following crossdomain.xml, served at serverb.com:3333/websiteB/crossdomain.xml:

        <?xml version="1.0"?>
        <cross-domain-policy>
            <allow-access-from domain="*"/>
        </cross-domain-policy>

    So now for my question: is it possible to get rid of the "*" and use a proper (not as general as *) domain name in the allow-access-from rule? All my attempts failed, and from what I understand it should be possible.
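
    For URL policy files the domain attribute is matched against the host the requesting SWF was loaded from (the port is not part of the match; to-ports applies only to socket policies), so an explicit rule would be a sketch along these lines, assuming the SWF really is served from servera.com itself:

        <?xml version="1.0"?>
        <cross-domain-policy>
            <allow-access-from domain="servera.com"/>
        </cross-domain-policy>

    If the SWF can also be loaded from hosts such as www.servera.com, a wildcard subdomain form like domain="*.servera.com" covers those, since an exact domain value does not match its subdomains.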


  • Problem calling web service from within JBOSS EJB Service

    - by Rob Goodwin
    I have a simple web service sitting on our internal network. I used soapUI to do a bit of testing, generated the service classes from the WSDL, and wrote some Java code to access the service. All went as expected, as I was able to create the service proxy classes and make calls. Pretty simple stuff. The only speed bump was getting Java to trust the certificate from the machine providing the web service. That was not a technical problem, but rather my lack of experience with SSL-based web services. Now on to my problem. I coded up a simple EJB service and deployed it into JBoss Application Server 4.3, and now the code that previously worked gives the following error:

        12:21:50,235 WARN [ServiceDelegateImpl] Cannot access wsdlURL: https://WS-Test/TestService/v2/TestService?wsdl

    I can access the WSDL file from a web browser running on the same machine as the application server using the URL in the error message. I am at a loss as to where to go from here. I turned on the debug logs in JBoss and got nothing more than what I showed above. I have done some searching on the net and found the same error in some questions, but those questions had no answers. The web service classes were generated with JAX-WS 2.2 using the wsimport ant task and placed in a jar that is included in the EJB package. JBoss is deployed on RHEL 5.4, whereas the standalone testing was done on Windows XP.
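
    One common cause in this situation is that the JAX-WS runtime inside JBoss fetches the WSDL over HTTPS at service-creation time using the application server's JVM, which does not share the truststore setup from the standalone test (the trusted certificate would need to be added to JBoss's JVM, e.g. via -Djavax.net.ssl.trustStore in its startup options). A workaround that avoids the remote fetch entirely is to bundle a copy of the WSDL in the JAR and point the generated service at it. A sketch, where TestService is assumed to be the wsimport-generated service class and the namespace/port names are guesses to be replaced with the real ones from the WSDL:

        // assumes META-INF/wsdl/TestService.wsdl is packaged in the EJB jar
        URL localWsdl = MyEjbService.class.getResource("/META-INF/wsdl/TestService.wsdl");
        QName serviceName = new QName("http://example.com/TestService/v2", "TestService");
        TestService service = new TestService(localWsdl, serviceName);
        TestServicePortType port = service.getTestServicePort();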


  • jQuery Validation error...

    - by Povylas
    Hi, I have been struggling with the jQuery Validation plugin. Here is the code:

        <script type="text/javascript">
        $(function() {
            var validator = $('#signup').validate({
                errorElement: 'span',
                rules: {
                    username: {
                        required: true,
                        minlenght: 6
                        //remote: "check-username.php"
                    },
                    password: {
                        required: true,
                        minlength: 5
                    },
                    confirm_password: {
                        required: true,
                        minlength: 5,
                        equalTo: "#password"
                    },
                    email: {
                        required: true,
                        email: true
                    },
                    agree: "required"
                },
                messages: {
                    username: {
                        required: "Please enter a username",
                        minlength: "Your username must consist of at least 6 characters"
                        //remote: "Somenoe have already chosen nick like this."
                    },
                    password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long"
                    },
                    confirm_password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long",
                        equalTo: "Please enter the same password as above"
                    },
                    email: "Please enter a valid email address",
                    agree: "Please accept our policy"
                }
            });

            var root = $("#wizard").scrollable({size: 1, clickable: false});

            // some variables that we need
            var api = root.scrollable();

            $("#data").click(function() {
                validator.form();
            });

            // validation logic is done inside the onBeforeSeek callback
            api.onBeforeSeek(function(event, i) {
                if ($("#signup").valid() == false) {
                    return false;
                } else {
                    return true;
                }
                $("#status li").removeClass("active").eq(i).addClass("active");
            });

            // if tab is pressed on the next button seek to next page
            root.find("button.next").keydown(function(e) {
                if (e.keyCode == 9) {
                    // seeks to next tab by executing our validation routine
                    api.next();
                    e.preventDefault();
                }
            });

            $('button.fin').click(function() {
                parent.$.fn.fancybox.close()
            });
        });
        </script>

    And here is the error:

        $.validator.methods[method] is undefined
        http://www.vvv.vhost.lt/js/jquery-validate/jquery.validate.min.js
        Line 15

    I am completely confused... Maybe some kind of handler is needed? I would be grateful for any kind of answer.
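
    One likely culprit, judging only from the posted code: the username rule is spelled minlenght while the corresponding message key is spelled minlength, and a rule name with no matching validation method is exactly what makes the plugin fail with "$.validator.methods[method] is undefined". Renaming the rule to minlength (so it reads minlength: 6, matching the password rules) should be the first thing to try.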


  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get:

        System.Net.WebException: The request failed with HTTP status 403: Forbidden.

    The database user has full permission, and the deployed CLR assembly and SP are even marked "unsafe"; I tried signing them, etc., so none of that is causing the problem. When I execute the very same C# code from a simple console application instead of as a SP, it all works fine. So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP, while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, and that explains the "403 Forbidden" exception. So my question boils down to this: how can I configure the SP/MS SQL Server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app (again, the C# code is the same, so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL Server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
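
    For what it's worth, .NET resolves the proxy for outgoing HTTP from the defaultProxy configuration seen by the hosting process - and code running inside SQL Server runs under the service account, which may pick up a machine-wide proxy that an interactive console session does not. Two usual levers, offered here as suggestions rather than a confirmed fix: set request.Proxy = null on the web request (or proxy class) before the call, or disable the default proxy in the machine.config read by SQL Server's CLR host:

        <!-- machine.config for the .NET version loaded by SQL Server: -->
        <system.net>
          <defaultProxy enabled="false"/>
        </system.net>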


  • How do I use a custom authentication mechanism for a Java web application with Spring Security?

    - by Adam
    Hi, I'm working on a project to convert an existing Java web application to use Spring Web MVC. As a part of this I will migrate the existing log-on/log-off mechanism to use Spring Security. The idea at this stage is to replicate the existing functionality and replace only the web layer, leaving the service classes and objects in place. The required functionality is simple: access is controlled to URLs, and to access certain pages the user must log on. Authentication is performed with a simple username and password, along with an extra static piece of information that comes from the login page. There is no notion of a role: once a user has logged on they have access to all of the pages. Behind the scenes, the service layer has a class with a simple authentication method:

        doAuthenticate(String username, String password, String info) throws ServiceException

    An exception is thrown if the login fails. I'd like to leave this existing service object that does the authentication intact but to "plug it into" the Spring Security mechanism. Can somebody suggest the best approach to take for this please? Naturally, I'd like to take the path of least resistance and leave the work where possible to Spring... Thanks in advance, Adam.
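
    One low-resistance route is a custom AuthenticationProvider that delegates to the existing service, so Spring Security drives the web plumbing while the legacy class keeps doing the actual check. A minimal sketch against Spring Security 3's API, where LegacyAuthService is a hypothetical stand-in for the existing service class (the extra static field from the login page could instead be carried per-request via a custom authentication filter and the token's details):

        import org.springframework.security.authentication.AuthenticationProvider;
        import org.springframework.security.authentication.BadCredentialsException;
        import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.AuthenticationException;
        import org.springframework.security.core.authority.AuthorityUtils;

        public class LegacyAuthenticationProvider implements AuthenticationProvider {

            private final LegacyAuthService legacyService; // hypothetical wrapper around the existing service
            private final String staticInfo;               // the extra piece of login information

            public LegacyAuthenticationProvider(LegacyAuthService legacyService, String staticInfo) {
                this.legacyService = legacyService;
                this.staticInfo = staticInfo;
            }

            public Authentication authenticate(Authentication authentication) throws AuthenticationException {
                String username = authentication.getName();
                String password = String.valueOf(authentication.getCredentials());
                try {
                    legacyService.doAuthenticate(username, password, staticInfo);
                } catch (ServiceException e) {
                    throw new BadCredentialsException("Login failed", e);
                }
                // No roles exist in this application, so grant a single marker authority.
                return new UsernamePasswordAuthenticationToken(username, password,
                        AuthorityUtils.createAuthorityList("ROLE_USER"));
            }

            public boolean supports(Class<?> authentication) {
                return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
            }
        }

    The provider is then registered with the authentication manager (e.g. via the security namespace's <authentication-manager>), and the URL access rules reduce to requiring the single marker role.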


  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web applications to production. Currently, that group has access to our code repository in VSS and uses VSS to deploy code that has been checked in. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would a Team Explorer client access suffice? I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx files and DLLs from the QA environment to production, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS. What confuses me is whether bin (binary) files that are local to the project should go into TFS. If so, doesn't this create problems for other developers, in that only one developer - the one with the binary checked out - can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds really ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number? Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?


  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                            {'EXIT',"Error reading Mnesia database"}}}
              in function application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems I was still using Mnesia. I have several questions:

    1. How can I make sure Mnesia is not being used?
    2. How can I divert all the ejabberd activities to PostgreSQL?

    This is the modules part of my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc,    []},
          {mod_announce, [{access, announce}]}, % requires mod_adhoc
          {mod_caps,     []},
          {mod_configure,[]},                   % requires mod_adhoc
          {mod_ctlextra, []},
          {mod_disco,    []},
          {mod_irc,      []},
          {mod_last_odbc, []},
          {mod_muc,      [
                          {access, muc},
                          {access_create, muc},
                          {access_persistent, muc},
                          {access_admin, muc_admin},
                          {max_users, 500}
                         ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub,   [ % requires mod_caps
                          {access_createnode, pubsub_createnode},
                          {plugins, ["default", "pep"]}
                         ]},
          {mod_register, [
                          {welcome_message, none},
                          {access, register}
                         ]},
          {mod_roster_odbc, []},
          {mod_stats,    []},
          {mod_time,     []},
          {mod_vcard_odbc, []},
          {mod_version,  []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.
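
    One caveat worth knowing, offered as general ejabberd background rather than a verified fix: even with all the _odbc module variants and ODBC authentication enabled, ejabberd 2.x still keeps its internal tables (routing, sessions, cluster configuration) in Mnesia, so Mnesia cannot be removed entirely - only the user data (rosters, offline messages, vCards, etc.) moves to PostgreSQL. Authentication is also switched separately from the modules, with top-level options along these lines:

        %% sketch for ejabberd 2.x with a local PostgreSQL database;
        %% names and credentials are placeholders
        {auth_method, odbc}.
        {odbc_server, {pgsql, "localhost", "ejabberd", "ejabberd_user", "password"}}.

    That also suggests a restore procedure needs to carry over (or rebuild) the Mnesia spool directory as well as the PostgreSQL dump.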


  • Which Code Should Go Where in MVC Structure

    - by Oguz
    My problem lies somewhere between the model and the controller. Everything works perfectly for me when I use MVC just for CRUD (create, read, update, delete). I have separate models for each database table. I access these models from the controller to perform CRUD operations. For example, in a contacts application, I have actions (create, read, update, delete) in the controller (contact) that use the model's (contact) methods (create, read, update, delete). The problem starts when I try to do something more complicated. There are some complex processes which I do not know where I should put. For example, the user registration process: I cannot just finish this process in the user model, because I have to use other models too (sending mails, creating other records for the user via other models) and do lots of complex validation via other models. For example, in some complex search processes, I have to access lots of models (articles, videos, images, etc.). Or, sometimes, I have to use APIs to decide what I will do next or which database model I will use to record data. So where is the place to do these complicated processes? I do not want to do them in controllers, because sometimes I need to use these processes in other controllers too. And I do not want to put these processes in models, because I use models as database access layers. Maybe I am wrong; I want to know. Thank you for your answer.


  • Ideas for a rudimentary software licensing implementation

    - by Ross
    I'm trying to decide how to implement a very basic licensing solution for some software I wrote. The software will run on my (hypothetical) clients' machines, with the idea being that the software will immediately quit (with a friendly message) if the client is running it on greater-than-n machines (n being the number of licenses they have purchased). Additionally, the clients are non-tech-savvy to the point where "basic" is good enough. Here is my current design, but given that I have little to no experience in the topic, I wanted to ask SO before I started any development on it:

    - A remote server hosts a MySQL database with a table containing two columns: client-key and license quantity.
    - The client-side application connects to the MySQL database on startup, offering its client-key that I've put into a properties file packaged into the distribution (I would create a new distribution for each new client).
    - Chances are, I'll need a second table to store validation history, so that with some short logic the software can decide if it can be run on a given machine (maybe a sliding window of n machines using the software per 24 hours).
    - If the software cannot establish a connection to the MySQL database, or decides that it's over the n allowed machines per day, it closes.
    - The connection info for the remote server hosting the MySQL database should be hard-coded into the app? (That sounds like a bad idea, but otherwise they could point it at some other always-validates-to-success server.)

    I think that about covers my initial design. The intent being that while it certainly isn't foolproof, I think I've made it at least somewhat difficult to create an easily-sharable cracking solution. Also, I can easily adjust the license amount for a given client/key pair. I've got to figure this has been done a million times before, so tell me about a better solution that's just as simple to implement and provides the same (low) amount of security. In the event that external libraries are used, I prefer Java, as that's what the software has been written in.
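
    A common refinement of this design is to put a small HTTPS endpoint in front of the database instead of letting clients connect to MySQL directly: the database credentials then never ship with the application, and the sliding-window logic lives on the server where it cannot be patched out as easily. A minimal client-side sketch in Java, with a hypothetical licensing URL and an assumed "OK"/"DENIED" response body:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import javax.net.ssl.HttpsURLConnection;

        public final class LicenseCheck {

            // Hypothetical endpoint that applies the key/quantity logic server-side.
            private static final String LICENSE_URL = "https://licensing.example.com/check";

            public static boolean isLicensed(String clientKey) {
                try {
                    URL url = new URL(LICENSE_URL + "?key=" + clientKey);
                    HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
                    conn.setConnectTimeout(5000); // fail fast if the server is unreachable
                    conn.setReadTimeout(5000);
                    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    String verdict = in.readLine();
                    in.close();
                    return "OK".equals(verdict);
                } catch (Exception e) {
                    return false; // per the design above: no server contact means no license
                }
            }
        }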


  • Creating Java classes dynamically and making them accessible across different JVMs on the network (i.e. serialization)

    - by inj.rav
    Hi. I have a requirement to create Java classes dynamically and make them accessible to different JVMs across the network. I tried to use reflection and the Javassist tool, but nothing worked. Let me explain the scenario: we are using the Coherence distributed cache, which has the power of doing aggregation/filtering in parallel across the cluster. For example, if a class (the dynamic class) has an amount variable and getAmount/setAmount methods, then when we execute Coherence queries, it will start processing in parallel across the cluster. I tried to create classes at runtime by using Javassist and reflection. I am able to access them from a single JVM, but when I tried to access the same class from another JVM (through the Coherence cluster), I got a class-not-found exception (as the remote JVM has no idea of this class). I can overcome this by creating the same class dynamically on the remote JVM also and accessing the methods. But Coherence's built-in methods/functions are not able to find the class. Could someone help me with this matter?


  • Spring MVC with annotations: how to ensure that a method is always called

    - by TheStijn
    Hi, I'm currently migrating a project that uses Spring MVC without annotations to Spring MVC with annotations. This is causing fewer problems than expected, but I did come across one issue. In my project I have set up an access mechanism. Whether or not a user has access to a certain view depends on more than just the role of the user (e.g. it also depends on the status of the entity, the mode (view/edit), ...). To address this I had created an abstract parent controller which has a method hasAccess. This method also calls other methods, like getAllowedEditStatuses, which are here and there overridden by the child controllers. The hasAccess method gets called from the showForm method (the code below was minimized for readability):

        @Override
        protected ModelAndView showForm(final HttpServletRequest request,
                final HttpServletResponse response, final BindException errors) throws Exception {
            Integer id = Integer.valueOf(request.getParameter("ID"));
            Project project = this.getProject(id);
            if (!this.hasAccess(project, this.getActiveUser())) {
                return new ModelAndView("errorNoAccess", "code", project != null ? project.getCode() : null);
            }
            return this.showForm(request, response, project, errors);
        }

    So, if the user has no access to the view, he gets redirected to an error page. Now the 'pickle': how to set this up when using annotations? There no longer is a showForm or other method that is always called by the framework. My (and maybe your) first thought was: simply call this method from within each controller before going to the view. This would of course work, but I was hoping for a nicer, more generic solution (less code duplication). The only other solution I could think of is preceding the hasAccess method with the @ModelAttribute annotation, but this feels a lot like abusing the framework :-). So, does anyone have a (better) idea? Thanks, Stijn
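
    One generic hook that survives the move to annotations is a HandlerInterceptor, which Spring MVC invokes before every mapped handler method, so the check moves out of the controller hierarchy into one place. A minimal sketch, assuming the access logic can be factored into a hypothetical AccessChecker collaborator (the per-controller getAllowedEditStatuses overrides would have to become data the checker can look up, e.g. keyed by the request path):

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

        public class AccessInterceptor extends HandlerInterceptorAdapter {

            private final AccessChecker accessChecker; // hypothetical: wraps the old hasAccess logic

            public AccessInterceptor(AccessChecker accessChecker) {
                this.accessChecker = accessChecker;
            }

            @Override
            public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                    Object handler) throws Exception {
                if (!accessChecker.hasAccess(request)) {
                    response.sendRedirect("errorNoAccess"); // or forward to the error view
                    return false; // the handler method is never invoked
                }
                return true;
            }
        }

    The interceptor is registered on the handler mapping (in Spring 2.5, via the interceptors property of DefaultAnnotationHandlerMapping), so it applies to all annotated controllers without touching each one.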


  • Designing secure consumer blackberry application

    - by Kiran Kuppa
    I am evaluating a requirement for a consumer BlackBerry application that places a high premium on the security of the user's data. It seems it is for an insurance company. Here are my ideas on how I could go about it; I am sure this will be useful for others who are looking for similar stuff:

    - Force the user to use a device password. (I am guessing this is possible, though I have not checked yet.) The application can request notifications when the device is about to be locked and just after it has been unlocked; encryption of application-specific data can be managed at those times.
    - Application data would be encrypted with the user's password. The user's credentials would be encrypted with the device password.
    - Remote backup of the data could be done over HTTPS (any better ideas are appreciated).

    Questions:

    1. What if the user forgets his device password?
    2. If the user forgets his application password, what is the best and most secure way to reset it?
    3. If the user loses the phone, remote backup must be done and the application data must be cleaned up.

    I have some ideas on how to achieve (3) and shall share them: there must be an off-line verification of the user's identity, and the administrator must provide a channel over which the user can send a command to the device to wipe the application data. The idea is that the user is ALWAYS in control of his data; without the user's consent, even the admin must not be able to do activities such as cleaning up the data. In the above scheme of things, it appears as if the user's password need not be sent over the air to the server. Am I correct? Thanks, --Kiran Kumar


  • Debugging JSR 168 Portlet with spring, eclipse & pluto.

    - by mikep
    I am trying to set up a development environment to test Spring Portlet MVC for development of JSR 168-conforming portlets. I have the latest STS installed, which includes Spring 2.5 and Eclipse (Catalina). This has been my environment to develop with Spring MVC, and that works fine using Apache as a local server for debugging. I found some instructions on the Pluto portal site on using Pluto as a remote debugging host for portlets, and I have implemented those instructions. I am sending Eclipse into debug mode by right-clicking on one of the JSPs and going into "debug as". My problem is that when I log into Pluto, it is not sending me into debug mode. I am seeing the default Pluto page as opposed to my portlet. My portlet has not been installed onto Pluto, and the instructions do not seem to require the portlet to be installed. To help, I have a screenshot at http://www.ceruleaninc.ca/pluto%5Fproblem.jpg, showing the following:

    - Eclipse showing the remote debugging to localhost:8000
    - Tomcat showing "Listening for transport dt_socket at address: 8000"
    - The catalina.bat jpda start command
    - The Pluto Portal screen after log in

    Thanks much! I would welcome any advice on approaches to debugging portlets. I am not tied to Pluto. There does seem to be a lack of detailed instructions on this topic.


  • Directly accessing the modem in Windows Mobile

    - by kigurai
    For some reason I need to be able to access the internal modem of a Windows Mobile smartphone (an HTC S740 with WM version 6.1). What I want is to be able to access it as if it were a serial port, in order to issue AT commands. I have code that uses the TAPI line interface and lineGetID() to get a "handle" on which I should be able to do ReadFile()/WriteFile(). Sadly, I have not gotten it to work. What I do currently is:

    1. Initialize TAPI with lineInitializeEx().
    2. Open the line with lineOpen().
    3. Iterate through each available device and get info. Currently I am selecting the "UNIMODEM"/"Hayes compatible on COM1" device. But maybe I should choose the "TAPI cellular service"/"Cellular Line" instead? I have tried the "Cellular Line" device with the same result.
    4. Use lineGetID() on the selected device to get a handle.
    5. Do WriteFile("AT\r") and then directly do a ReadFile(), which should give me an "OK" back if it really was the modem I accessed.
    6. Realize that it doesn't work and get annoyed...

    But this has so far been a no-go. Does anyone have any idea how to do it? I am doing this in native Win32 C++ with the Windows Mobile 6 SDK.

    UPDATE: I have so far managed to get a data connection between two phones using RIL, which gives me a serial port handle to write to and read from. BUT, I would still like to be able to interact directly with the modem to send AT commands. So, the bounty I am starting only concerns getting direct access to the modem in order to issue AT commands. My investigations so far indicate that this was possible in previous versions of Windows Mobile (by opening COM2 and/or COM9 and slaying RIL, or something like that), but I have not yet seen code that works on WM6.


  • Can I use the auto-generated Linq-to-SQL entity classes in 'disconnected' mode?

    - by Gary McGill
    Suppose I have an automatically-generated Employee class based on the Employees table in my database. Now suppose that I want to pass employee data to a ShowAges method that will print out name & age for a list of employees. I'll retrieve the data for a given set of employees via a LINQ query, which will return me a set of Employee instances. I can then pass the Employee instances to the ShowAges method, which can access the Name & Age fields to get the data it needs. However, because my Employees table has relationships with various other tables in my database, my Employee class also has a Department field, a Manager field, etc. that provide access to related records in those other tables. If the ShowAges method were to access any of those fields, this would cause lots more data to be fetched from the database on demand. I want to be sure that the ShowAges method only uses the data I have already fetched for it, but I really don't want to have to go to the trouble of defining a new class which replicates the Employee class but has fewer members. (In my real-world scenario, the class would have to be considerably more complex than the Employee class described here; it would have several 'joined' classes that do need to be populated, and others that don't.) Is there a way to 'switch off' or 'disconnect' the Employee instances so that an attempt to access any property or related object that's not already populated will raise an exception? If not, then I assume that since this must be a common requirement, there might be an already-established pattern for doing this sort of thing?

