Search Results

Search found 19157 results on 767 pages for 'shared folder'.


  • InstallShield 2009: Remove a dll (being used by explorer.exe for context menus) during uninstall

    - by Samir
    InstallShield Premier 2009, Basic MSI project: I have a DLL that is supposed to be removed during uninstall, but it is in use by explorer.exe. The DLL generates Windows Explorer context menu items and their icons (the same kind of items and icons you see when you right-click any file or folder and see the WinRAR entries). Manually, I have to unregister the DLL, kill explorer.exe from Task Manager, start explorer.exe again, and only then can I delete the DLL. So, from InstallShield, how can I delete this DLL cleanly? (When uninstalling WinRAR or similar software there is no problem: no message boxes are shown and the related context menu icons are removed as well.)
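    One common approach for a file locked by explorer.exe is to schedule it for deletion at the next reboot with MoveFileEx. This is a minimal Win32 sketch, not an InstallShield-specific answer; the DLL path is a placeholder and the idea of calling this from a deferred custom action is an assumption:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical path to the shell-extension DLL that explorer.exe keeps loaded. */
            const wchar_t *dll = L"C:\\Program Files\\MyApp\\ContextMenuExt.dll";

            /* NULL target + MOVEFILE_DELAY_UNTIL_REBOOT asks Windows to delete the file
               at the next boot. This typically requires an elevated (admin) context,
               which an MSI uninstall normally has. */
            if (!MoveFileExW(dll, NULL, MOVEFILE_DELAY_UNTIL_REBOOT))
            {
                wprintf(L"MoveFileEx failed: %lu\n", GetLastError());
                return 1;
            }
            wprintf(L"DLL scheduled for deletion on reboot.\n");
            return 0;
        }

    Unregistering the DLL (regsvr32 /u or the equivalent custom action) during uninstall is still needed so the context menu entries and icons disappear even before the file itself is removed.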

    Read the article

  • MOD Rewrite Mask for Image URL

    - by user345426
    Okay, so I am launching a cloned e-commerce site. I want to create a rewrite rule so the image folder of the second site fetches images from the first site: RewriteRule ^alice.gif$ www.rhinomart.com/images/h_home.gif When I go to alice.gif directly in the browser, it simply redirects me to the rhinomart.com URL and image. How do I prevent the redirect from occurring? When I go to http://www.acnbiz.net/alice.gif it should fetch the image directly from Rhinomart.com/images without redirecting. Is it possible?
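    A sketch of one way to do this, assuming Apache with mod_proxy and mod_proxy_http enabled on the cloned site: pointing the rule at an absolute URL and adding the [P] flag makes Apache fetch the image server-side and stream it back, so no redirect reaches the browser.

        # vhost or .htaccess on www.acnbiz.net (assumed host); requires mod_proxy + mod_proxy_http
        RewriteEngine On
        # [P] = proxy the request via mod_proxy and return the body instead of issuing a 3xx
        RewriteRule ^alice\.gif$ http://www.rhinomart.com/images/h_home.gif [P,L]

    Without [P], a substitution that points at another host can only be honored as an external redirect, which is the behavior being seen.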

    Read the article

  • Problem with a basic implementation of the MVVM pattern - Services

    - by netmajor
    I watched some videos and read articles about the MVVM pattern and started thinking about how to implement it in my Silverlight app. So first I created a Silverlight application. For a clear structure I created three folders: View - for each user control page in my app, ViewModel - for the C# classes that will query data, and Model - for the Entity Data Model of my SQL Server or Oracle database. Now I am confused, because I want to use *WCF/RIA Services/Web services* in my project. In which folder should I put the service classes? In the examples I have seen, the services fetch and filter data and the output is then bound to the View - so they look like ViewModels. But I was sure that some people use services in the Model, and that is what I want to do. But how? Can someone explain implementing services as the Model? Is my understanding of MVVM correct?
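    A minimal C# sketch of one common arrangement (all names here are hypothetical): the WCF/RIA call sits in the Model layer behind a service interface, the ViewModel depends only on that interface and exposes bindable collections, and the View binds to the ViewModel without ever touching the service directly.

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // Model layer: the entity plus a service abstraction over WCF/RIA/Web services.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerService
        {
            // Service calls in Silverlight are asynchronous, so results arrive via a callback.
            void GetCustomers(Action<IList<Customer>> onLoaded);
        }

        // ViewModel layer: exposes bindable state and delegates all data access to the service.
        public class CustomerListViewModel
        {
            private readonly ICustomerService _service;

            public ObservableCollection<Customer> Customers { get; private set; }

            public CustomerListViewModel(ICustomerService service)
            {
                _service = service;
                Customers = new ObservableCollection<Customer>();
                _service.GetCustomers(items =>
                {
                    foreach (var c in items)
                        Customers.Add(c);
                });
            }
        }

    In folder terms, the service interface and its WCF-backed implementation would live under Model (or a Services subfolder of it), which matches the "services as part of the Model" reading.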

    Read the article

  • When using an FTPS connection to transfer a file, what is the difference between binary and ASCII transfer modes?

    - by shaleen mohan
    I am using an FTPS connection to send a text file [the file will contain EDI (Electronic Data Interchange) information] to an INOVIS mailbox. I have configured the system to open an FTPS connection, and using the PUT command I write the file to a folder on the FTP server. The problem is: what mode of file transfer should I use? How do I switch between modes? And which mode is the best practice when transferring files over an FTPS connection? If someone can provide a small ftp script it would be helpful.
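    For context: ASCII mode translates line endings between platforms, while binary (image) mode transfers the bytes unchanged; for EDI payloads binary is usually the safer default. A minimal sketch using a classic command-line ftp client follows (host, credentials, and file names are placeholders; a plain ftp client does not negotiate TLS, so for true FTPS the same binary/ascii choice would be made in a client such as lftp or curl):

        # upload.ftp -- run with: ftp -n -s:upload.ftp  (Windows)  or  ftp -n < upload.ftp  (Unix)
        open ftp.example.com
        user myuser mypassword
        binary
        # use "ascii" instead of "binary" if line-ending translation is actually wanted
        put edi_batch.txt /inbox/edi_batch.txt
        bye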

    Read the article

  • ASP.NET error when trying to configure ListView + SqlDatasource for a MySql Table

    - by stighy
    Hi, I'm using ASP.NET with a MySQL DB. I'm trying to configure a ListView, so I've written: <asp:SqlDataSource ID="dsDatiUtente" runat="server" ConnectionString="Server=12.28.136.29;Database=mydb;Uid=m111d1;Pwd=fake;Pooling=false;" ProviderName="MySql.Data.MySqlClient" SelectCommand="SELECT * FROM user WHERE idUser=@IdUser" /> At the beginning of my aspx page I've added <%@ Import Namespace="MySql.Data.MySqlClient" %> But if I click on the SqlDataSource and click "Refresh Schema" I get this error: "Unable to retrieve schema.... Unable to find the requested .Net Framework data provider" I have installed the provider, but I have also uninstalled old versions and then installed new ones. In my project I simply copied the MySQL DLL into the "bin" folder and added a reference to it; I'm not sure that is the correct way. I need the refreshed schema so Visual Studio can build the ListView for me automatically... if I can't auto-build the ListView, I have to write all the code by hand, which is too much work for me :( What am I doing wrong? Thank you!
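    "Unable to find the requested .Net Framework data provider" usually means the MySql.Data.MySqlClient provider factory is not registered for the application. A hedged sketch of a web.config registration, placed under <configuration>; the Version value is a placeholder and must match the MySql.Data assembly actually sitting in bin:

        <system.data>
          <DbProviderFactories>
            <!-- remove any stale registration first, then register the factory from the referenced MySql.Data assembly -->
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.0.0.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>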

    Read the article

  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, css files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in IIS for the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders. Mostly, 'favicon.ico'. 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is, how do I tell IIS not to log these requests? And is there any reason that this is a bad idea?
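    If this is IIS 6, one approach is the per-node DontLog metabase property, which can be applied to individual files. This is a hedged sketch: the site ID of 1 and the AdminScripts path are assumptions, and it is worth verifying that handler-served paths such as the .axd URLs honor the setting the same way a physical file like favicon.ico does.

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/favicon.ico/DontLog TRUE
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/ScriptResource.axd/DontLog TRUE
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/WebResource.axd/DontLog TRUE

    As for drawbacks: the usual one is that traffic and bandwidth reports no longer reflect those requests, which only matters if those numbers feed capacity planning.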

    Read the article

  • Is it possible to auto-import a module from a different subfolder into another subfolder?

    - by mamcx
    I have a kind of plugin system with this layout:

        Python/
            SDK/
            Plugins/
                Plugin1/
                Plugin2/

    All three packages have an __init__.py file. I wonder if it is possible to do import SDK from any plugin (as if SDK were in the site-packages folder). I'm in a situation where I need to deploy, update, delete, add or change SDK files or any of the plugins under non-admin accounts, and I wonder if I can get at SDK in a clean way (I could sys.path.append in every plugin, but I wonder if a better option exists). I imagined that using this in the Plugins __init__ could work: import sys import os ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__),'..')) print ROOT_DIR sys.path.append( ROOT_DIR ) But clearly this code is not executed (I assumed __init__ was auto-magically executed when the module loads :( )
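    For context: __init__.py in the Plugins package does run, but only when the package itself is imported (import Plugins or import Plugins.Plugin1); loading a plugin file directly by path bypasses it. A minimal sketch, assuming the plugins are imported as subpackages of Plugins, that makes the folder containing SDK importable from any plugin:

        # Plugins/__init__.py -- runs the first time the Plugins package is imported
        import os
        import sys

        # Parent of the Plugins package, i.e. the folder that also contains SDK/
        _ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))

        # Prepend so a stray SDK elsewhere on sys.path does not win
        if _ROOT_DIR not in sys.path:
            sys.path.insert(0, _ROOT_DIR)

    After that, any plugin can simply do import SDK, as long as the plugin is loaded through the Plugins package so this __init__ actually executes.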

    Read the article

  • Java - getClassLoader().getResource() driving me bonkers

    - by Click Upvote
    I have this test app: import java.applet.*; import java.awt.*; import java.net.URL; public class Test extends Applet { public void init() { URL some=Test.class.getClass().getClassLoader().getResource("/assets/pacman.png"); System.out.println(some.toString()); System.out.println(some.getFile()); System.out.println(some.getPath()); } } When I run it from Eclipse, I get the error: java.lang.NullPointerException at Test.init(Test.java:9) at sun.applet.AppletPanel.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Classpath (from the .classpath file): <classpathentry kind="src" path="src"/> In my c:\project\src folder, I have only the Test.java file and the 'assets' directory, which contains pacman.png. What am I doing wrong and how do I resolve it?
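    Two things in that lookup commonly cause the NullPointerException: Test.class.getClass() yields java.lang.Class itself, whose class loader is the bootstrap loader and is reported as null, so calling getResource on it blows up; and ClassLoader.getResource does not take a leading slash. A hedged, standalone sketch of the usual fixes (it assumes the assets folder ends up on the classpath, e.g. under src so Eclipse copies it to the output folder):

        import java.net.URL;

        public class ResourceCheck {
            public static void main(String[] args) {
                // Class.getResource: a path starting with '/' is resolved from the classpath root
                URL viaClass = ResourceCheck.class.getResource("/assets/pacman.png");

                // ClassLoader.getResource: paths are already root-relative, so no leading '/'
                URL viaLoader = ResourceCheck.class.getClassLoader().getResource("assets/pacman.png");

                System.out.println(viaClass);
                System.out.println(viaLoader);
            }
        }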

    Read the article

  • JavaScript and PHP filename coding conventions

    - by Tower
    Hi, I would like to know the popular ways of naming files in JavaScript and PHP development. I am working on a JS+PHP system, and I do not know how to name my files. Currently, for JS, I do: framework/ framework/widget/ framework/widget/TextField.js (Framework.widget.TextField()) Framework.js (Framework()) So, my folders are lowercase and objects CamelCase, but what should I do when the folder/namespace requires more than one word? And what about PHP? jQuery seems to follow: jquery.js jquery.ui.js jquery.plugin-name.js so that it is jquery(\.[a-z0-9-])*\.js but ExtJS follows a completely different approach. Douglas Crockford only gives us details about his preference for syntax conventions.

    Read the article

  • GetDiskFreeSpaceEx reports wrong number of free bytes

    - by rboorgapally
    __int64 i64FreeBytes;
        unsigned __int64 lpFreeBytesAvailableToCaller,
                         lpTotalNumberOfBytes,
                         lpTotalNumberOfFreeBytes; // variables used to obtain the free space on the drive

        GetDiskFreeSpaceEx(Manager.capDir,
                           (PULARGE_INTEGER)&lpFreeBytesAvailableToCaller,
                           (PULARGE_INTEGER)&lpTotalNumberOfBytes,
                           (PULARGE_INTEGER)&lpTotalNumberOfFreeBytes);
        i64FreeBytes = lpTotalNumberOfFreeBytes;
        _tprintf(_T("Number of bytes free on the drive: %I64u \n"), lpTotalNumberOfFreeBytes);

    I am working on a data management routine, a Windows CE command-line application. The code above shows how I get the number of free bytes on the drive that contains the folder Manager.capDir (the variable holding the full path name of the directory). My question is: the number of free bytes reported by the code (the _tprintf statement) does not match the number of free bytes shown for the drive (which I check by right-clicking on the drive). What is the reason for this difference?

    Read the article

  • My error when upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, upload and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had it's own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. once again, it seemed to go great, rebooted, and came back up to the same issue, where it came to 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishwork engineer to look at the files and the code and determine what file was messed up and fixed it. The system was up and working just fine, it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would rollback to the 4.2 code.   Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part because something was wrong with the 4.2 code file here and the 4.0 was just fine.  So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.  So.... 
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned.  By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helpped me for the above issue, but it's important to follow the correct proceedure when doing an upgrade. 1) Upload software to both controllers and wait for it to unpack 2) On controller "A" navigate to configuration/cluster and click "takeover" 3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 4) Wait for controller "B" to apply the update and finish rebooting 5) Login to controller "B", navigate to configuration/cluster and click "takeover" 6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 7) Wait for controller "A" to apply the update and finish rebooting 8) Login to controller "B", navigate to configuration/cluster and click "failback"

    Read the article

  • Cocoa app not launching on build & go but launching manually

    - by Matt S.
    I have quite the interesting problem. Yesterday my program worked perfectly, but now today I'm getting exc_bad_access when I hit build and go, but if I launch the app from the build folder it launches perfectly and there seems to be nothing wrong. The last bunch of lines from the debugger are: #0 0xffff07c2 in __memcpy #1 0x969f7961 in CFStringGetBytes #2 0x96a491b9 in CFStringCreateMutableCopy #3 0x991270cc in -[NSCFString mutableCopyWithZone:] #4 0x96a5572a in -[NSObject(NSObject) mutableCopy] #5 0x9913e6c7 in -[NSString stringByReplacingOccurrencesOfString:withString:options:range:] #6 0x9913e62f in -[NSString stringByReplacingOccurrencesOfString:withString:] #7 0x99181ad0 in -[NSScanner(NSDecimalNumberScanning) scanDecimal:] #8 0x991ce038 in -[NSDecimalNumberPlaceholder initWithString:locale:] #9 0x991cde75 in -[NSDecimalNumberPlaceholder initWithString:] #10 0x991ce44a in +[NSDecimalNumber decimalNumberWithString:] Why did my app work perfectly yesterday but not today?

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • SASS interface in HTML6 for uploading files

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx [This post is about experiment & imagination] Since Windows XP (the oldest OS I have tried), Windows has had a feature for sending a file to a pen drive or creating a shortcut on the Desktop. In XP and Win7 (Win8 has it too, it was not removed), you just select the file, right-click > Send to, and you can send the file to many places. In my menu it also shows Skype, because I have it installed; that Skype entry confirms that we can add our own app there to make it easier for users to send a file into our app. Nowadays many people use the cloud or an online site to store files. With HTML5 drag and drop, you need the site open, on the page that handles file upload. You select the files, drag and drop them, and after that the file is uploaded to the server and the site shows it in a list (if no error happens). But this kind of upload is not very convenient, because I have to open the site every time I want to do it.

    In this post I want to describe a feature that could make this better. The API would simply be called the SASS FILE UPLOAD API. With this API, when you browse a site and reach a file upload page, the page tells you that it also supports the SASS FILE API; enable it for a better experience.

    How this would work: the API is activated on two conditions. 1. The feature is disabled by default on the site (or you can change that). 2. The API allows a specific site to upload files. Uploads can have rules, for example a minimum or maximum file size, or the formats the site allows you to upload. On a resume site, for instance, you would be allowed to use .doc (according to the site's code).

    How the browser recognizes that a site offers the SASS service: the HTML source of the site contains a meta tag similar to this:

        <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of several settings:

        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get":    "file/get",    "routing": "/file/get/{fileName}" },
            { "post":   "file/post",   "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put":    "file/put",    "routing": "/file/put/{fileName}" },
            { "all":    "file/all",    "routing": "/file/all/{fileName}" }
          ]
        }

    cookie_name is simply a cookie that should be stored in the browser and is defined in the JSON. We define cookie_name so it can easily be shared with the service on the Windows system. Only this cookie is accessible to the service, so it is safe from a security point of view; other cookies are not shared. Files are posted, put, and fetched through the routing locations above. The "all" location simply returns the whole list of files, as a treeview-style JSON describing the directories on the server.

    For example, take example.com: if you have activated the API with this site, you will see a Send to option in explorer.exe. When you send, a window opens asking which folder you want to send the file to; the window also describes the limits and how much you can upload. None of this requires the site to be open. The file itself is uploaded over the FTP protocol, which is better for performance.

    How this API would make things faster:
Suppose you want to ask a question and include an image: you just prepare it in advance, and when you open stackoverflow.com, Stack Overflow only has to ask which of the files you want to attach to the question you are asking. A second use is for people who work with cloud apps. There is no need for drag and drop anymore, and we don't need to open the site either. This is still at the experiment level; I will update this post when I make some progress on this API.

    Read the article

  • Unable to commit to Subversion

    - by Ewan Makepeace
    I have a client who had to rebuild his automated build server. He checked out his project folder from my subversion server but is now no longer able to commit - he gets this error: Error: Commit failed (details follow): Error: Cannot write to the prototype revision file of transaction '551-1' because a Error: previous representation is currently being written by another process Finished!: I have searched Google but although this error has been often reported there is no clear explanation - does anyone on StackOverflow have a solution? UPDATE: Nobody else commits to that repository, so it was not a transaction stuck (at least not from another user). In the end we found that permissions were not set correctly. Not that you would know it from this message, but that fixed the problem.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless applications deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site, the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service top route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive; and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) You have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • BASH Install Of Wordpress, Without Visiting wp-admin/install.php

    - by user916825
    I wrote this little BASH script that creates a folder,unzips Wordpress and creates a database for a site. The final step is actually installing Wordpress, which usually involves pointing your browser to install.php and filling out a form in the GUI. I want to do this from the BASH shell, but can't figure out how to invoke wp_install() and pass it the parameters it needs: -admin_email -admin_password -weblog_title -user_name (line 85 in install.php) Here's a similar question, but in python #!/bin/bash #ask for the site name echo "Site Name:" read name # make site directory under splogs mkdir /var/www/splogs/$name dirname="/var/www/splogs/$name" #import wordpress from dropbox cp -r ~/Dropbox/Web/Resources/Wordpress/Core $dirname cd $dirname #unwrap the double wrap mv Core/* ./ rm -r Core mv wp-config-sample.php wp-config.php sed -i 's/database_name_here/'$name'/g' ./wp-config.php sed -i 's/username_here/root/g' ./wp-config.php sed -i 's/password_here/mypassword/g' ./wp-config.php cp -r ~/Dropbox/Web/Resources/Wordpress/Themes/responsive $dirname/wp-content/t$ cd $dirname CMD="create database $name" mysql -uroot -pmypass -e "$CMD" How do I alter the script to automatically run the installer without the need to open a browser?
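    wp_install() lives in wp-admin/includes/upgrade.php, so it has to be called from PHP with WordPress loaded; the simplest way to drive that from BASH is usually WP-CLI, which wraps the same installer. A hedged sketch that could be appended to the script (it assumes the wp binary is installed and on PATH; the URL and admin details are placeholders that reuse the script's variables):

        # Finish the install without visiting wp-admin/install.php (assumes WP-CLI is available)
        cd "$dirname"
        wp core install \
          --url="http://example.com/$name" \
          --title="$name" \
          --admin_user="admin" \
          --admin_password="changeme" \
          --admin_email="admin@example.com" \
          --allow-root

    The alternative, if WP-CLI is not an option, is a small PHP wrapper that requires wp-load.php and wp-admin/includes/upgrade.php and then calls wp_install() with the same values.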

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get most out of Oracle technologies to evolve your IT to the Next Wave, I encourage you to register to the up coming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers having successfully evolve their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will make an update about Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to make a status of Cloud Computing in France, and CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same : think hybrid. From Traditional IT to Clouds One size doesn't fit all, and for big companies having already an IT in place, there will be parts eligible to external (public) cloud, and parts that would be required to stay inside the firewalls, so ability to integrate both side is key.  None the less, one of the major impact of Cloud Computing trend on IT, reported by Forrester, is the pressure it makes on CIO to evolve towards the same model that end-users are now used to in their day to day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself toward "a Global Service Provider", or for some as "IT "is" the Business" (see : Gartner Identifies Four Futures for IT and CIO), and for both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most of existing external Cloud and a firm IT : the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs has to combine everything together external and internal. And for the internal parts that they will have to make them evolve to a Private Cloud, the scope can be very large. This will often require CIOs to evolve from their traditional approach to more disruptive ones, the time has come to introduce new standards and processes, if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and most importantly what needs to be done by the  Cloud Provider, and what is left over to the user. IaaS, PaaS, SaaS : what's provided and what needs to be done First of all the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT will want to provide, the more IT will have to deliver and integrate : from disks to applications. As we can see in the above picture, providing pure IaaS, left a lot to cover for the end-user, that’s why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide to them a development platform ready to use, which is what PaaS is standing for, by providing not only the processors power, storage and OS, but also the Database and Middleware platform. SaaS being the last mile of the Cloud, providing an application ready to use by business users, the remaining part for the end-users being configuring and specifying the application for their specific usage. 
In addition to that, there are common challenges encompassing all type of Cloud Services : Security : covering all aspect, not only of users management but also data flows and data privacy Charge back : measuring what is used and by whom Application management : providing capabilities not only to deploy, but also to upgrade, from OS for IaaS, Database, and Middleware for PaaS, to a full Business Application for SaaS. Scalability : ability to evolve ALL the components of the Cloud Provider stack as needed Availability : ability to cover “always on” requirements Efficiency : providing a infrastructure that leverage shared resources in an efficient way and still comply to SLA (performances, availability, scalability, and ability to evolve) Automation : providing the orchestration of ALL the components in all service life-cycle (deployment, growth & shrink (elasticity), upgrades,...) Management : providing monitoring, configuring and self-service up to the end-users Oracle Strategy and Clouds For CIOs to succeed in their Private Cloud implementation, means that they encompass all those aspects for each component life-cycle that they selected to build their Cloud. That’s where a multi-vendors layered approach comes short in terms of efficiency. That’s the reason why Oracle focus on taking care of all those aspects directly at Engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level,...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house, are exactly the same that we are using to build Clouds, bringing you flexibility, reversibility and fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we are delivering all those components hardware and software already engineered together at Oracle factory, with a single pane of glace for the management of ALL the components through Oracle Enterprise Manager, and with high-availability, scalability and ability to evolve by design. To give you a feeling of what does that bring in terms just of implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track of record to have the system plug into existing Datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enable CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also taking out the automation and evolution complexity and cost. I invite you to discuss all those aspect in regards of your particular context face2face on November 28th.

    Read the article

  • What is the difference between ClearCase and VSS when labeling a release?

    - by raj
    Hi, we are using ClearCase as our SCM. I do not have much experience with ClearCase. We are about to release our code to production, and I want to label my code as I did with VSS in my previous projects. But in ClearCase, labeling is not as easy as in VSS: ClearCase asks me to create a label type before I can label a folder in the VOB. I don't understand the concept of creating a label type. Any guidance on this would be highly appreciated.
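    For background: in ClearCase a label type is the named definition of the label, created once per VOB, and labeling then attaches instances of that type to element versions (VSS has no such separation, which is why the extra step feels unfamiliar). A minimal cleartool sketch; the label name and path are placeholders:

        # create the label type once in the VOB
        cleartool mklbtype -nc RELEASE_1_0

        # attach the label to a folder and everything below it
        cleartool mklabel -recurse RELEASE_1_0 /vobs/myproj/src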

    Read the article

  • Django and ajax image file upload errors and csrf

    - by sharkfin
    I tried out Alex Kuhl's ajax script to upload images to Django 1.4. My first question is why I'm getting an empty page with firebug telling me I have two errors: fileuploader.js (line 4): syntax error <!DOCTYPE html> In my template html: qq is not defined var uploader = new qq.FileUploader( { Here is my entire html file for it: http://pastebin.com/NjbV5gMn This post suggests that either some script has 404'd or the src attribute is empty, which would cause the doctype error. But that doesn't seem to be the case here. As for why qq is not defined, I'm not sure what is wrong. Django can clearly find the fileuploader.js just fine from my static folder. My second question is why the ajax code uses {{ csrf_token }} instead of {% csrf_token %}. But if I use {% csrf_token %}, I get the firebug error: missing } after property list 'csrf_token': '<div style='display:none'<input type='hidden' name='csrfmiddlewaretoken' value='Cx0zFFak6OLgrHiAnFa3k4BPDmn4BgoT' /</div',
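    On the second question: {{ csrf_token }} is the template variable holding just the token string, so it can sit inside a JavaScript object, while {% csrf_token %} renders a whole hidden <input> element, which is why dropping it into an object literal produces the "missing } after property list" error. A hedged sketch of how the uploader is typically wired up in the template; the option names follow the valums/qq FileUploader plugin the script is based on and the action URL is hypothetical:

        var uploader = new qq.FileUploader({
            element: document.getElementById('file-uploader'),
            action: '/upload/',                 // hypothetical Django view URL
            params: {
                // raw token value -- valid inside a JS object literal
                'csrfmiddlewaretoken': '{{ csrf_token }}'
            }
        });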

    Read the article

  • Android: Displaying Video on VideoView

    - by AndroidDev93
    I'm trying to display a video in my sdcard on the video view. Here is my code: String name = Environment.getExternalStorageDirectory() + "/test.mp4"; final VideoView videoView = (VideoView)findViewById(R.id.videoView1); videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() { public void onPrepared(MediaPlayer arg0) { videoView.start(); } }); videoView.setVideoPath(name); The file I am trying to open is called test.mp4 and its located within the sdcard folder. I get an error saying the application has unfortunately stopped. I would appreciate it if someone could help me. Thanks. EDIT : I used the debugger and found out that I get an InvocationTargetException. The detailed message says that : Failure delivering result ResultInfo{who=null, request=1001, result=-1, data=Intent { dat=file:///mnt/sdcard/test.mp4 }} to activity : java.lang.NullPointerException EDIT : I looked at the logcat again and it seems to give the error at videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() { I'm guessing either videoView or MediaPlayer is null.

    Read the article

  • Image through script: how to add a variable to the URL

    - by Liso22
    I'm sure it's pretty straightforward, since someone just solved a similar problem I had. I have a folder full of city images and need users to see the image corresponding to their city. Right now I'm getting the user's location without problem using "geoip_city()", but I don't quite know how to integrate it into the URL of the image. All images have the following format: New York.jpg, Boston.jpg, so I just need the script to put the location before .jpg. This is what I'm trying now: <img src="blank.png" id="image" > <script type="text/javascript"> document.getElementById('image').src = "Imagenes/grupos/' + geoip_city() + '.jpg"; </script> I believe I'm just messing up the quotes or something similar. Could anyone tell me what I'm doing wrong? Also, this is where I'm testing it: http://chusmix.com/?page_id=1770 Thanks
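    The quoting in that assignment is indeed mismatched (it opens with a double quote but switches to single quotes around the concatenation). A hedged correction, with encodeURIComponent added since city names such as "New York" contain spaces (geoip_city() is assumed to return the plain city name, as in the post):

        document.getElementById('image').src =
            "Imagenes/grupos/" + encodeURIComponent(geoip_city()) + ".jpg";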

    Read the article

  • Problem integrating a WordPress blog into a CakePHP website

    - by Nishant
    Hello everyone, I was working on a CakePHP site that was successfully delivered. Recently the client appeared again and asked me to add a WordPress blog to it, to cover the blogging side of his site. He wants to share authentication between CakePHP and WP: whoever registers on his site and logs in should, on clicking the Blog tab, be redirected to the WP blog with the session still intact. After some googling I have installed it in the /app/webroot/blog folder, but I am not able to edit the .htaccess file. Please point me in the right direction: first, how to share authentication between CakePHP and WordPress, and second, how to customize the .htaccess file so that the URLs look good. Thanks in advance..!
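    On the .htaccess part, a hedged sketch of the stock WordPress rewrite block for a blog living in a subfolder of the Cake webroot; the /blog path matches the post, everything else is the standard WordPress .htaccess (Cake's default webroot .htaccess usually lets requests for real directories fall through, so /blog requests should reach this file):

        # app/webroot/blog/.htaccess
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /blog/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog/index.php [L]
        </IfModule>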

    Read the article

  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Richard Lefebvre
    The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been a very important one, but now that there are so many alternative companies to do business with and because people have such public ways to voice their displeasure, it's necessary for vendors to have multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer. The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000. Alignment and Logic Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, Inquira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success. Keynotes, Workshops, and Tents of Innovation We scored by having Seth Godin as our keynote speaker ? always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle. Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. 
Each person had the opportunity to engage with experts and peers and construct some real takeaways. The conference hotel was across from Union Square so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas. Product Portfolio Plus Thought Leadership Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves and it's harder to do on the product side because everyone looks the same on paper. But on thought leadership ? we can, and did, take some really big steps. David Vap Group Vice President Oracle Applications Product Development

    Read the article

  • Eclipse antRunner command line build has wrong dependency build order

    - by Sam Jones
    My team's Eclipse RCP app builds fine from the Eclipse IDE. When I try to build it from the command line, using the antRunner application, it builds the plugins out of order -- a plugin is built before its dependencies are built, and so it cannot resolve some of the classes it needs. Where should I look to fix this? As far as I can tell, the dependencies are set up correctly (a feature that depends on a plugin that depends on another plugin). I have my build.properties and folder structure set up as specified here. Is there anything else I should look at?
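    One thing worth checking in a headless PDE/antRunner build: the build order is computed from the plug-in manifests, not from the workspace, so a dependency that the IDE resolves through the project's Java build path but that is missing from the plug-in's MANIFEST.MF will compile fine in Eclipse and out of order on the command line. A hedged sketch of the kind of entry to verify (bundle names are placeholders; continuation lines in a manifest start with a single space):

        Manifest-Version: 1.0
        Bundle-ManifestVersion: 2
        Bundle-SymbolicName: com.example.app.ui;singleton:=true
        Bundle-Version: 1.0.0.qualifier
        Require-Bundle: com.example.app.core;bundle-version="1.0.0",
         org.eclipse.ui,
         org.eclipse.core.runtime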

    Read the article
