Search Results

Search found 40998 results on 1640 pages for 'setup project'.


  • running jar in a terminal using axis2

    - by Emilio
    I'm trying to run a java application, distributed in a jar file, from the command line. It calls an axis2 web service, so the jar contains an /axis2client directory with the rampart.mar security module. It works fine when I run it in NetBeans, but it throws an exception if I try to run it in a terminal using this command: java -jar myfile.jar The Exception: org.apache.axis2.AxisFault: invalid url: //file:/home/xxx/Desktop/myfile.jar!/axis2client/ (java.net.MalformedURLException: no protocol: //file:/home/xxx/Desktop/myfile.jar) As you can see, it's trying to use the /axis2client directory inside the jar, just as when I run it in NetBeans, but it fails with a MalformedURLException. I think it's something about the 'file:' protocol; probably '//file:/' should be 'file:///'. The problem is that I cannot change this call to the directory, because the method that loads the /axis2client directory isn't mine; it comes from another library that my project uses, which includes all the axis2 support. So, any ideas? Thanks in advance, lads!
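    A quick way to check the asker's hunch about the protocol prefix (a minimal sketch; the paths are copied from the stack trace above):

        import java.net.MalformedURLException;
        import java.net.URL;

        public class UrlCheck {
            public static void main(String[] args) throws MalformedURLException {
                // Parses fine: the protocol comes first, then the path.
                System.out.println(new URL("file:///home/xxx/Desktop/myfile.jar"));
                try {
                    // Fails exactly like the stack trace in the question.
                    new URL("//file:/home/xxx/Desktop/myfile.jar");
                } catch (MalformedURLException e) {
                    System.out.println(e); // java.net.MalformedURLException: no protocol: ...
                }
            }
        }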

    Read the article

  • Is it a bad practice to store large files (10 MB) in a database?

    - by B Seven
    I am currently creating a web application that allows users to store and share files, 1 MB - 10 MB in size. It seems to me that storing the files in a database would significantly slow down database access. Is this a valid concern? Is it better to store the files in the file system and save the file name and path in the database? Are there any best practices related to storing files when working with a database? I am working in PHP and MySQL for this project, but is the issue the same for most environments (Ruby on Rails, PHP, .NET) and databases (MySQL, PostgreSQL)?
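    A minimal sketch of the filesystem-plus-metadata approach the question describes (MySQL syntax; the table and column names are illustrative, not from any standard):

        -- Files live on disk; the database only stores where to find them.
        CREATE TABLE user_files (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            user_id     INT           NOT NULL,
            file_name   VARCHAR(255)  NOT NULL,  -- original name shown to users
            file_path   VARCHAR(500)  NOT NULL,  -- location on disk, ideally outside the web root
            size_bytes  INT UNSIGNED  NOT NULL,
            uploaded_at DATETIME      NOT NULL
        );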

    Read the article

  • How to implement caching in web apps?

    - by Jhonnytunes
    This is really two questions. I'm doing a project for the university for storing baseball player statistics, and from the baseball data I have to calculate the score by year for the player being displayed. The background is: let's say 10,000 users hit the player "Alex Rodriguez"; the application would have to calculate the A-Rod stats by year 10,000 times, instead of just reading them from somewhere they are temporarily saved. Here I go: What is the best method for caching this type of data? Should I use the same database and store some temporary values in it, or create a Web Service for that? What reading about web caching do you recommend?
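    One way to picture the "temporary values in the same database" idea from the question (a sketch only; the names and types are illustrative):

        -- Computed stats are written once and reused until new game data invalidates them.
        CREATE TABLE player_stats_cache (
            player_id   INT      NOT NULL,
            season_year INT      NOT NULL,
            stats       TEXT     NOT NULL,  -- pre-computed per-year score/stat lines
            computed_at DATETIME NOT NULL,  -- lets you expire or refresh stale entries
            PRIMARY KEY (player_id, season_year)
        );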

    Read the article

  • Java Application for handling records (CRUD)

    - by LivingThing
    I am new to Java EE and am faced with a tight situation here. I have to develop a Java application for handling records (CRUD) and saving and loading an XML file concerning those records. Obviously, I won't be asking you to do this for me. What I am asking for is some hints/pointers. Initially I thought JAXB would be enough for this, but after putting a lot of time into learning it and implementing the program, I realized that it can only create the XML and read it; for update and delete I would have to do something else. Even if update and delete weren't feature requirements for my project, I would still think that using JAXB alone is not a good implementation. I was wondering if "REST with Java (JAX-RS) using Jersey" would do the trick for me?
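    For what it's worth, update and delete can be expressed with JAXB alone by unmarshalling, editing the object graph, and marshalling it back out. A minimal sketch, assuming hypothetical JAXB-annotated Records/Record classes (they are not part of any library):

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.Unmarshaller;
        import java.io.File;

        public class RecordStore {
            public static void main(String[] args) throws Exception {
                JAXBContext ctx = JAXBContext.newInstance(Records.class);
                Unmarshaller u = ctx.createUnmarshaller();
                Records records = (Records) u.unmarshal(new File("records.xml"));

                // "Delete" and "update" happen on the in-memory object graph:
                records.getRecordList().removeIf(r -> r.getId() == 42);
                records.getRecordList().get(0).setName("updated name");

                // Writing the whole document back out completes the CRUD cycle.
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
                m.marshal(records, new File("records.xml"));
            }
        }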

    Read the article

  • Support / Maintenance documentation for development team

    - by benwebdev
    Hi, I'm working in the development dept (around 40 developers) for a large e-commerce company. We've grown quickly but have not evolved very well in the field of documenting our work. We work with an Agile / Scrum-like methodology for our development and testing, but documentation seems to be neglected. We need to be able to produce documentation that would aid a developer who hasn't worked on our project before or is new to the company. We also have to create more high-level information for our support department to explain any extra config settings and fixes for known issues that may arise, if any. Currently we put this in a badly put-together wiki, based on an old SharePoint / TFS site. Can anyone suggest some ideal links or advice on improving the documentation standard? What works in other companies? Has anyone got advice on developing documentation as part of an agile process? Many thanks, ben

    Read the article

  • When should a bullet texture be loaded in XNA?

    - by Bill
    I'm making a SpaceWar!-esque game using XNA. I want to limit my ships to 5 active bullets at any time. I have a Bullet DrawableGameComponent and a Ship DrawableGameComponent. My Ship has an array of 5 Bullet objects. What is the best way to manage the Bullet textures? Specifically, when should I be calling LoadTexture? Right now, my solution is to populate the Bullet array in the Ship's constructor, with LoadTexture being called in the Bullet constructor. The Bullet objects will be disabled/not visible except when they are active. Does the texture really need to be loaded once for each individual instance of the bullet object? This seems like a very processor-intensive operation. Note: This is a small-scale project, so I'm OK with not implementing a huge texture-management framework since there won't be more than half a dozen or so textures in the entire game. I'd still like to hear about scalable solutions for future applications, though.
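    One common small-scale pattern (a sketch, not necessarily the only answer): load the texture once in the Ship and hand the same Texture2D to every Bullet. XNA's ContentManager also caches assets internally, so even repeated Load calls for the same asset name return one shared texture rather than five copies. The Bullet constructor taking a texture is an assumption for illustration:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Ship : DrawableGameComponent
        {
            private const int MaxBullets = 5;
            private readonly Bullet[] bullets = new Bullet[MaxBullets];
            private Texture2D bulletTexture; // one texture shared by all five bullets

            public Ship(Game game) : base(game) { }

            protected override void LoadContent()
            {
                // "Bullet" is an illustrative asset name in the content project.
                bulletTexture = Game.Content.Load<Texture2D>("Bullet");
                for (int i = 0; i < MaxBullets; i++)
                {
                    bullets[i] = new Bullet(Game, bulletTexture); // hypothetical ctor
                    bullets[i].Enabled = false;
                    bullets[i].Visible = false;
                }
                base.LoadContent();
            }
        }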

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
    One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava! If you aren't familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I'll wait. You'll be in a happy place within the first ten minutes. If you skip to the end, you'll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here. Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and then decided to port my crappy long-polling "there are new posts" feature of POP Forums to use SignalR. A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

        // in the reply setup
        PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

        // called from the reply setup and the handler that fetches more posts
        PopForums.pollForNewPosts = function (topicID) {
            $.ajax({
                url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
                type: "GET",
                dataType: "text",
                data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
                success: function (result) {
                    var lastPostLoaded = result.toLowerCase() == "true";
                    if (lastPostLoaded) {
                        $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
                    } else {
                        $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                        clearInterval(PopForums.replyInterval);
                    }
                },
                error: function () { }
            });
        };

    What's going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. If you monitor the requests in FireBug, you see a new one fired off every second and a half. Gross. The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub's only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it's important to remember that the hub is also the means of calling methods at the client end. Starting at the server side, here's the hub:

        using Microsoft.AspNet.SignalR.Hubs;

        namespace PopForums.Messaging
        {
            public class Topics : Hub
            {
                public void ListenTo(int topicID)
                {
                    Groups.Add(Context.ConnectionId, topicID.ToString());
                }
            }
        }

    Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you'll see how JavaScript calls the above method in a moment.
    Next, the broker class and its associated interface:

        using Microsoft.AspNet.SignalR;
        using Topic = PopForums.Models.Topic;

        namespace PopForums.Messaging
        {
            public interface IBroker
            {
                void NotifyNewPosts(Topic topic, int lastPostID);
            }

            public class Broker : IBroker
            {
                public void NotifyNewPosts(Topic topic, int lastPostID)
                {
                    var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
                    context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
                }
            }
        }

    The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to the clients in the group matched by the topic ID. It's calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new'd up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

        _broker.NotifyNewPosts(topic, post.PostID);

    The JavaScript side of things wasn't much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

        var hub = $.connection.topics;
        hub.client.notifyNewPosts = function (lastPostID) {
            PopForums.setReplyMorePosts(lastPostID);
        };
        $.connection.hub.start().done(function () {
            hub.server.listenTo(topicID);
        });

    The important part to look at here is the creation of the notifyNewPosts function. That's the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.

    Read the article

  • What to choose: API-based server or socket-based server for a data-driven application

    - by Imdad
    I am working on a project which has a desktop application for Mac (Cocoa), a native application for iPhone, and another native application for iPad. All the applications do almost the same thing. They are data-driven applications, and every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, this makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which technology (of sockets) would be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions.

    Read the article

  • I need a webpage to host my javascript!

    - by Amir Reza
    Does anyone know a website that hosts JavaScript on their page? I have a research project that needs to collect some RTT measurements from all over the world and compare them together. I have written the JavaScript code for that, but I do not have a high-hit-rate website to put it on to collect data. I know it is a little bit of an odd question to ask, but do you know any website or any trick that can help me? Note that the script would not do any harm to anybody! :-) Thanks. Decad is right: I basically need some people to put my script on their "high-hit-rate" websites, so I can collect data from a large number of clients. Of course, the script runs in the background with no harm to the page. It basically measures some RTTs and submits them to a server. I already have some pages, but they barely get a hit from outside! Thanks,

    Read the article

  • Facebook link to facebook.com/company-page doesn't work

    - by Teo
    For the last 2 days I've been trying to find the reason why my previous setup, a link to my website's Facebook page, doesn't work anymore. I assume that I mistakenly changed something in the Facebook developer area, but I can't remember what it was. The button previously linked to facebook.com/company-page; now the same button links me just to facebook.com. I think I saw a redirect in the tab, but I'm not sure, since it changes to facebook.com too fast. The original link in the footer is correct: <a href="http://facebook.com/company-page " target="_blank" class="facebook_ico"></a>. Any ideas?

    Read the article

  • New VS2012 Book: Pro Application Lifecycle Management with Visual Studio 2012

    - by Jakob Ehn
    During the spring/summer I have been involved in reviewing a new book about Visual Studio 2012 ALM from Apress called "Pro Application Lifecycle Management with Visual Studio 2012". The book is written by fellow Visual Studio ALM MVP Mathias Olausson and his colleague Joachim Rossberg. It is a very comprehensive book that covers all aspects of ALM in general, as well as how to implement these practices with Visual Studio 2012. The book also has several chapters dedicated to measuring your improvements by using ALM assessments and metrics. Read more about the book here on Mathias's blog: http://msmvps.com/blogs/molausson/archive/2012/07/17/book-project-pro-application-lifecycle-management-with-visual-studio-2012-completed.aspx You can pre-order the book here at Amazon: http://www.amazon.com/Application-Lifecycle-Management-Visual-Professional/dp/1430243449/ Check it out!

    Read the article

  • Configuring mouse buttons to switch between apps?

    - by Matt Gregory
    I just installed 14.04, so I'm using the default setup (Unity, I guess). I have these two extra mouse buttons on the side of my mouse. Is there any way to map these so they can switch between open apps? What would be perfect is if clicking on button 6 (or whatever it is) would cycle forward through apps, button 7 would go backwards, and holding one of the buttons would show the task list and let you click on the app you want. That's really what I want.

    Read the article

  • How do you balance documentation requirements with Agile development

    - by Jeremy
    In our development group there are currently discussions around agile and waterfall methodology. No one has any practical experience with agile, but we are doing some reading. The agile manifesto lists 4 values:
    Individuals and interactions over processes and tools
    Working software over comprehensive documentation
    Customer collaboration over contract negotiation
    Responding to change over following a plan
    We are an internal development group developing applications for the consumption of other units in our enterprise. A team of 10 developers builds and releases multiple projects simultaneously, typically with 1 - maybe 2 (rarely) - developers on each project. It seems that from a supportability perspective the organization needs to put some real value on documentation, as without it there are serious risks with resourcing changes. With agile favouring interactions and software deliverables over processes and documentation, how do you balance that with the requirements of supportable systems and maintaining knowledge and understanding of how those systems work? With a waterfall approach, which favours documentation (requirements before design, design specs before construction), it is easy to build a process that meets some of the organizational requirements - how do we do this with an agile approach?

    Read the article

  • Grant a user permissions on www-data owned /var/www

    - by George Pearce
    I have a simple web server setup for some websites, with a layout something like:
    site1: /var/www/site1/public_html/
    site2: /var/www/site2/public_html/
    I have previously used the root user to manage files, and then given them back to www-data when I was done (WordPress sites; this is needed for WP uploads to work). This probably isn't the best way. I'm trying to find a way to create another user (let's call it user1) that has permission to edit files in site1, but not site2, and that doesn't stop the files being 'owned' by www-data. Is there any way for me to do this?
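    One common way to get this effect without touching www-data's ownership is a per-site ACL. A sketch, assuming the filesystem is mounted with ACL support:

        # Give user1 read/write access to site1 only; www-data stays the owner.
        sudo setfacl -R -m u:user1:rwX /var/www/site1
        # Default ACLs on the directories so newly created files inherit the same access.
        sudo find /var/www/site1 -type d -exec setfacl -m d:u:user1:rwX {} +
        # site2 is untouched, so user1 gets no special rights there.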

    Read the article

  • How to integrate the .gdf with a specific exe for Games Explorer

    - by Kraemer
    Hello, I want to create an installer for a game and, after that, an icon to be put in Games Explorer for Windows Vista and Windows 7. I have created the GDF (game definition file), then built the script for the project and obtained the .h, GDF and .rc files. But I can't compile the .rc file into an executable using Visual Studio 2010, to be used after that to create the installer. An error pops up after I set the executable path: "Could not load file or assembly 'Microsoft.VisualStudio.HpcDebugger.Impl, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong? I need to mention that I've never worked before with GDF Editor and Visual Studio. Any answer would be highly appreciated. Thanks!

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful for making packages flexible, so that you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package, you have to pass the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object used in several packages has the same name in all the packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of such objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific - or "local" - XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx
    Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table (a hedged reconstruction of its definition appears at the end of this post). In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies which package a configuration line has to be applied to: a package will use a line only if the value in the ConfigurationFilter column is equal to its name. So, for a sample row with ConfigurationFilter set to "simple-package", only the package named "simple-package" will use that row. There is one exception: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to store DTS Configurations in SQL Server: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx
    Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
    The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "global" and "local" configurations. Now, you may start to wonder why this approach is needed at all, instead of simply always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if, in one of the two configuration files, you have a value that has to be applied to an object that doesn't exist in the loaded package. This situation will raise an error that halts package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this makes deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
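    As promised above, here is a hedged reconstruction of the configuration table from the column descriptions in this post; the exact data types and sizes are assumptions, and the schema/table name simply mirrors the /AC example:

        CREATE TABLE ssiscfg.configuration
        (
            ConfigurationFilter NVARCHAR(255) NOT NULL, -- package name, or $$Global for all packages
            ConfigurationValue  NVARCHAR(255) NOT NULL, -- value applied at runtime
            PackagePath         NVARCHAR(255) NOT NULL, -- object the value is applied to
            ConfiguredValueType NVARCHAR(50)  NOT NULL, -- data type of the value
            -- Hash of ConfigurationFilter plus PackagePath, used as the primary key
            -- to guarantee uniqueness of configuration rows:
            [Checksum] AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED NOT NULL PRIMARY KEY
        );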

    Read the article

  • Infrastructure and Platform As A Service in Private Cloud at Lawrence Livermore National Laboratory

    - by Anand Akela
    Scientists at the National Ignition Facility (NIF), the world's largest laser, at the Lawrence Livermore National Laboratory (LLNL), need a research environment that re-creates the physical environment and conditions that exist inside the sun. They have built a private cloud infrastructure using Oracle VM and Oracle Enterprise Manager 12c to provision such an environment for research. Tim Frazier of LLNL joined the "Managing Your Private Cloud With Oracle Enterprise Manager" session at Oracle OpenWorld 2012 and discussed how the latest features in Oracle VM and Oracle Enterprise Manager 12c enable them to accelerate application provisioning in their private cloud. He also talked about how to increase service delivery agility, improve standardized roll-outs, and do proactive management to gain total control of the private cloud environment. He also presented at the "Scene and Be Heard Theater" at Oracle OpenWorld 2012 and shared a lot of good information about his project and what they are doing in their private cloud environment. Learn more by looking at Tim's presentation.

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you're using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.
    Installation and Setup
    Just download and install the application with the defaults. When the application launches, you'll be prompted to enter your username and email to get a registration key, or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts, then double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you'll see the Connection Success screen; just close out of it.
    Browse and Manage Files
    Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It's very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the Explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multi-tab view so you can easily keep track of your different accounts and local machine, and you can create or delete storage buckets directly in the Explorer. Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. One cool option lets you move your data inside Amazon S3; this is faster, and doesn't cost money, compared to moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well. Not all features are available in the free version; when one isn't, you'll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check, and you can send the information to the CloudBerry team for assistance.
    Delete Files from Amazon S3
    To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the context menu. Click Yes in the confirmation dialog box, and you can watch the progress in the bottom section of the Explorer as your files are deleted.
    Conclusion
    The CloudBerry Explorer free version has several neat features that will allow you easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of stored data from $0.15 per GB to only $0.10 per GB.
    If you're a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS. Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • Prewritten App for Used Car Dealer?

    - by Shawn Eary
    Is there somewhere I can find a prewritten web app (with database) for a used car dealer? The application would need to support the following:
    Easy setup on a low-cost shared or cloud host
    Give potential customers an easy way to browse current inventory (cars on the lot) with suggested prices
    Give the dealership an easy way to log in and update inventory (cars on the lot) and suggested prices
    Give potential customers an easy way to send the dealership an inquiry about a specific vehicle on the lot, with CAPTCHA-style spam protection
    I prefer ASP.NET MVC and Microsoft SQL Server, but I might consider other technologies such as WebForms and LightSwitch (HTML5). I am reasonably comfortable with MVC and WebForms, but I really don't want to waste a bunch of time writing an application that might already exist. I did find a few interesting templates via Bing that seem to cover CSS and layout, but I'm not sure if they contain any business logic or if they would integrate well into an MVC app.

    Read the article

  • How do you develop web applications? [closed]

    - by ck3g
    How do you and/or your team develop your web applications? Language, framework, or platform doesn't matter. I would like to know about the structure of your environment. For example:
    Using an IDE on a workstation with project files on a remote host, accessed via SFTP, so files are saved instantly on the remote host;
    All files are local and are uploaded to the remote host on save;
    Files are local, a web server runs on the local computer, and everything is tested at localhost;
    etc.
    You could also write about the benefits of your approach; this would be useful for me. Thanks.
    Update: there must be a question here, so here it is: which approach is best, in your opinion?

    Read the article

  • How do I resolve "No JSON object could be decoded" on mythbuntu live CD?

    - by Neil
    I have been running a MythTV frontend on my laptop for some time against a MythTV backend installed in Linux Mint 12 on another computer, and everything works fine. Now, I'm trying out the Mythbuntu Live CD (12.04.1 32-bit) on the laptop, to turn it into a dedicated front end. It's connecting to the network just fine, and I can see my server. When I click on the frontend icon on the desktop, it asks me for the security code, which I've verified against mythtv-setup on the server. However, when I test that code, it shows the error message "No JSON object could be decoded". I've looked in the control center to see if there's something else I should be setting up. The message above implies to me that it can't find the server, but I can find no place in the control center to tell it where to find my myth backend, which I find a little odd. Does the live CD not work against a backend server on another machine?

    Read the article

  • 12.04 boots into terminal after first install. How to boot into GUI permanently?

    - by Deniz
    As a person with quite limited CLI experience, I congratulate myself on installing Ubuntu on an ancient non-PAE Fujitsu Amilo M1425 through the network with mini.iso. However, upon reboot I'm met with the following:
    Ubuntu 12.04.1 LTS ubuntu-fujitsu tty1
    ubuntu-fujitsu login:
    The login I specified during setup is not accepted (I'm quite sure it's correct). Let's assume I get past this screen: how do I start the GUI and make it the permanent option during boot? This box will go back to a mostly computer-illiterate person, for whom the existence of Ubuntu will be enough of a shock already. I wouldn't want to leave him without the GUI. Other posts here mention the command startx, but I probably need to log in first. So "why won't it accept my login & how can I make the GUI boot permanent?" is my question. Thanks in advance.

    Read the article

  • bash profile works for user but not sudo

    - by user564448
    I've modified my .profile to include a folder in my PATH if a flash drive is plugged in. When running the command as the user, it works fine but tells me the script must be run by sudo (this is what I want). However, when I try to run it with sudo I get "command not found". I have a symlink (flash) in my /var/www folder pointing to my /media/flash drive (never mind this setup; it's just for dev). This is my user's .profile:

        # set PATH so it includes flash scripts
        if [ -d "/var/www/flash/scripts" ] ; then
            PATH="/var/www/flash/scripts:$PATH"
        fi

    When trying to run it with sudo I get: sudo: script: command not found. Any ideas?
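    A likely missing piece of the puzzle: sudo does not read the user's .profile and, on Ubuntu, replaces PATH with the secure_path setting from /etc/sudoers, so additions like the one above never reach the sudo environment. A quick way to see this and work around it for a single invocation (a sketch; "script" stands in for the script name from the question):

        # Compare the two environments: sudo's PATH comes from secure_path in /etc/sudoers.
        echo "$PATH"
        sudo env | grep '^PATH'
        # Preserve the caller's PATH for one invocation:
        sudo env "PATH=$PATH" script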

    Read the article

  • Oracle BI and EPM Partner Blogs

    - by Mike.Hallett(at)Oracle-BI&EPM
    Below is a simple list of some of our specialist Oracle BI and EPM Partner Blogs, where there is lots of great material and discussions.
    http://www.aortabi.nl/news/ (Netherlands)
    http://www.clearpeaks.com/blog/ (English)
    http://www.peakindicators.com/index.php/knowledge-base (English)
    http://www.project.eu.com/blog/ (English)
    http://www.qubix.co.uk/insights (English)
    http://www.rittmanmead.com/blog/ (English)
    https://www.endecacommunity.com/ (English)
    If you are a specialist OPN EMEA BI and EPM Partner with hints and tips to share, and would like your Blog to be added to this list, then just let me know @ [email protected].

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build, so you can fix and redeploy the database very quickly. The alternative approach (i.e. making changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data - a common problem caused by changing a table without re-creating dependent views.
    Using SQL Projects in SSMS
    A great many developers fail to make use of SQL projects in SSMS. To me they are an invaluable way of organizing your SQL scripts. A typical SSMS solution is made up of several projects - one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project names. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc.
    Concatenating SQL Scripts
    To concatenate the SQL scripts into one file, I use notepad.exe to create a simple batch file which uses the TYPE command to write the content of the SQL script files into a combined file (a sketch appears at the end of this post). As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file, while two angled brackets (>>) mean append the output of the program to a file. So the command line DIR > filelist.txt would write the output of the DIR command into a file called filelist.txt. In this example, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.
    Using SQLCmd to execute the concatenated file
    Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd from another batch file. SQLCmd has numerous options, but the script here simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.
    Using SSIS to execute the concatenated file
    To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal, and then select File Connection as the SQLSourceType. Create a file connection to your concatenated SQL script and you are ready to go.
    Tips and Tricks
    Add a new-line at the end of every file: The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix is to ensure all your files have a new-line at the end.
    Remove all USE database statements: SQLCmd identifies which database the script should be run against, so you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!!
    Do the CREATE DATABASE separately: If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package, before calling the package that executes the concatenated SQL script.
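    For reference, here is a sketch of the two batch files described above; the folder names are illustrative, while the file, database, and server names follow the example in the post:

        REM --- Concatenate.bat: combine all project scripts into one file ---
        REM Remove the read-only flag that source control may have set.
        ATTRIB -R SB_DDS.sql
        TYPE "100 Tables\*.sql"            >  SB_DDS.sql
        TYPE "110 Views\*.sql"             >> SB_DDS.sql
        TYPE "120 Stored Procedures\*.sql" >> SB_DDS.sql

        REM --- Deploy.bat: run the combined script and log any errors ---
        SQLCMD -S localhost -d SB_DDS_DB -i SB_DDS.sql -o SB_DDS.log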

    Read the article
