Search Results

Search found 19157 results on 767 pages for 'shared folder'.


  • Setup shortcut keys not working

    - by Tim
    In my Ubuntu 12.04 keyboard settings, I didn't find a shortcut key for restarting X, so under "Custom Shortcuts" I set up Ctrl+Alt+Backspace to run the command sudo restart lightdm. But the shortcut doesn't work. Is it because the command requires root privileges? Also, I have a SysRq key on my keyboard, which I believe is the "Magic SysRq key". My SysRq key is shared with the PrtSc key (for screenshots) and is printed in blue, which means I have to hold Fn at the same time to invoke SysRq instead of PrtSc. But every time I press Fn+SysRq, it just takes a screenshot, the same as hitting PrtSc without Fn. I wonder how to use the Magic SysRq key? Does this mean the shortcut has not yet been linked to any command intended for the Magic SysRq key? PS: My laptop is a Lenovo T400 and the OS is Ubuntu 12.04. Thanks!
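
    Two hedged checks worth making on a default Ubuntu 12.04 install: custom shortcuts run without a terminal, so sudo cannot prompt for a password, and the magic SysRq feature is gated by a kernel toggle:

      # Custom shortcuts can't prompt for a sudo password; a graphical
      # wrapper (from the gksu package, if installed) can authenticate:
      gksudo "restart lightdm"

      # Magic SysRq is gated by a kernel setting; 0 disables it entirely:
      cat /proc/sys/kernel/sysrq
      echo 1 | sudo tee /proc/sys/kernel/sysrq   # enable until next reboot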

    Read the article

  • Oracle Contributor Agreements - New Home!

    - by alexismp
    The Oracle Contributor Agreement (the successor to the SCA) now has a new home - www.oracle.com/technetwork/goto/oca - and a new email address (listed on that page). This is the one-stop shop for the actual OCA (the document you are required to sign in order to participate, with shared copyright, in Oracle-led open source projects), the list of existing signatories, and an updated version of the FAQ. This earlier post on the topic has some context on the contributor agreement and where it came from. Note that if you are contributing to GlassFish and/or a sub-project (Jersey, OpenMQ, Grizzly, etc.), a single agreement can cover all of your contributions.

    Read the article

  • Sharing Authentication Across Subdomains using cookies

    - by Jordan Reiter
    I know that in general cookies by themselves are not considered robust enough to store authentication information. What I am wondering is whether there is an existing design pattern or framework for sharing authentication across subdomains without having to use something more complex like OpenID. Ideally, the process would be that the user visits abc.example.org, logs in, and continues on to xyz.example.org where they are automatically recognized (ideally, the reverse should also be possible -- a login via xyz means automatic login at abc). The snag is that abc.example.org and xyz.example.org are on different servers and different web application frameworks, although they can both use a shared database. The web application platforms include PHP, ColdFusion, and Python (Django), although I'm also interested in this from a more general perspective (i.e. language agnostic).
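
    As a hedged sketch of the usual pattern (not a specific framework): scope the session cookie to the parent domain so every subdomain receives it, and have each application validate the token against the shared database. The table, cookie, and variable names below are illustrative:

      <?php
      // Issue an unguessable token, record it in the shared database, and
      // scope the cookie to the parent domain so all subdomains see it.
      $userId = 42; // illustrative: the ID from your own login check
      $token = bin2hex(openssl_random_pseudo_bytes(32));

      $pdo = new PDO('mysql:host=dbhost;dbname=auth', 'dbuser', 'dbpass');
      $stmt = $pdo->prepare('INSERT INTO shared_sessions (token, user_id, expires_at)
                             VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 1 HOUR))');
      $stmt->execute(array($token, $userId));

      // '.example.org' makes the cookie visible on abc. and xyz. alike;
      // each app then looks the token up in shared_sessions per request.
      setcookie('sso_token', $token, 0, '/', '.example.org', true, true);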

    Read the article

  • Fusion CRM Data Integration and Migration from Conemis (D)

    - by Richard Lefebvre
    Conemis Data Integration Tools for Oracle Fusion CRM offer easy-to-use, pre-configured tools for data integration, data quality, and migration of data from Oracle CRM On Demand and third-party applications to Oracle Fusion CRM. The Conemis solution includes: pressure fueling of data for Fusion CRM; migration coverage from legacy systems to Fusion CRM; data quality in migration and integration; intuitive data housekeeping for IT and sales; and backups of Fusion CRM environments. The solution's benefits include out-of-the-box Fusion CRM integration, connection to other applications, ready-made data mapping, instant availability without installation, full configurability, shared use in integration expert groups, one GUI for several environments/pods, reduced costs and risks in migration projects, etc. Conemis AG, a German data integration company founded in 2009, offers software, services, and expertise for data migration and integration with Oracle CRM products. For more details, please contact Dr. Daniel Rolli ([email protected]) - www.conemis.com.

    Read the article

  • Help with Kodak esp 3250 printer driver on Lubuntu 12.1 SOLVED!

    - by user108608
    First, my system: Pentium 4 (don't remember the speed), 1 GB RAM, dual-booting from separate physical drives, Fdos and Lubuntu 12.1. Second, my LAN: I have four computers using the same printer, a Kodak ESP 3250:
    1. Intel quad-core i5, 4 GB RAM, running Windoze 7 64-bit - the printer is connected to and shared from this machine.
    2. Gateway 17" laptop running Windoze 7 32-bit.
    3. Asus tablet (small laptop) running Lubuntu 12.1.
    4. My dual-boot system running Fdos and Lubuntu 12.1.
    The problem: I downloaded c2esp_25c-1_i386.deb and tried to install it using the DEBI package installer; it loads the files, looks for the CUPS driver, and ends with an error: "Dependency is not satisfiable: libcupsdriver1 (=1.4.0)". What do I do now? Is there some place I can get the correct CUPS driver? Further information: the Asus tablet was running Ubuntu 12.1 (very slowly and with a few crashes) and could print to the LAN printer with no problems. Is there something in Ubuntu that can be loaded into Lubuntu? Noobee user hoping for answers, Paul
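
    A hedged diagnostic sketch (not a fix): the message means the archive cannot supply the exact library version the package was built against, so a useful next step is to see what the distribution's archive actually carries and what the .deb really depends on. The search terms are illustrative:

      # What does the archive offer for the CUPS driver library?
      apt-cache policy libcupsdriver1
      apt-cache search cups driver

      # What was this .deb actually built to depend on?
      dpkg-deb --info c2esp_25c-1_i386.deb | grep -i depends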

    Read the article

  • No databases showing in phpMyAdmin

    - by Thein Hla Maw
    My website is hosted on a shared hosting service and works fine, with updated news stored in a MySQL database. To manage the website's database, I installed phpMyAdmin in a sub-folder, with the same username and password used by the website. When I log in to phpMyAdmin, I don't see my database; phpMyAdmin shows "No databases" in the left pane. Is there anything I need to configure in phpMyAdmin?
    Edited: these are the settings in config.inc.php. I can log in to phpMyAdmin successfully.
    $cfg['Servers'][$i]['host'] = 'hostname';
    $cfg['Servers'][$i]['port'] = '';
    $cfg['Servers'][$i]['socket'] = '';
    $cfg['Servers'][$i]['connect_type'] = 'tcp';
    $cfg['Servers'][$i]['extension'] = 'mysqli';
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    $cfg['Servers'][$i]['user'] = 'dbuser';
    $cfg['Servers'][$i]['password'] = 'password';
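
    As a hedged first diagnostic (not phpMyAdmin-specific): connect with the same credentials from any MySQL client and confirm what the account is allowed to see. If the database is missing here too, the grant is missing on the hosting side rather than anything in config.inc.php:

      -- Run as the same dbuser phpMyAdmin uses:
      SHOW DATABASES;
      SHOW GRANTS FOR CURRENT_USER();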

    Read the article

  • Visual Studio 2008 adding incorrect working folders to TFS Workspace

    - by Bryan Rowe
    I am using Visual Studio 2008 with TFS. I have one workspace set up with one working folder. I map the root source control folder $/ to C:\TFS and get all code. When working on any project under the root, Visual Studio will randomly add incorrectly mapped working folders to my workspace. For example, it might map $/WebProject/ to C:\TFS\WebProject\DataAccess -- where the real files exist at C:\TFS\WebProject. Once it incorrectly adds these working folders, I can no longer open the solution. I am forced to remove the working folders that Visual Studio added and get latest from TFS. Has anyone experienced this? Is there something I can do to avoid running into this?
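
    When this happens, the stray mapping can be inspected and removed from the command line instead of through the GUI; a hedged sketch, with the workspace name illustrative:

      REM List the current folder mappings to spot the bogus entry:
      tf workfold /workspace:MyWorkspace

      REM Unmap the incorrectly added working folder:
      tf workfold /unmap $/WebProject /workspace:MyWorkspace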

    Read the article

  • File.mkdir is not working and I can't understand why

    - by gotch4
    Hello, I have this brief snippet:

    String target = baseFolder.toString() + entryName;
    target = target.substring(0, target.length() - 1);
    File targetdir = new File(target);
    if (!targetdir.mkdirs()) {
        throw new Exception("Errore nell'estrazione del file zip");
    }

    It doesn't matter whether I leave off the last char (which is usually a slash); it's done this way to work on both Unix and Windows. The path is obtained from the URI of the base folder via baseFolder.toString() (baseFolder is of type URI and is correct). The base folder actually exists. I can't debug this, because all I get is true or false from mkdirs, with no other explanation. The weird thing is that baseFolder was itself created with mkdir, and in that case it works. I'm on Windows; the value of target just before the creation of targetdir is "file:/C:/Users/dario/jCommesse/jCommesseDB". If I cut and paste it (without the last entry) into Windows Explorer, it works...
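
    A hedged sketch of what is likely going wrong: new File(String) treats "file:/C:/..." as a literal path name (the "file:" scheme is never stripped), so mkdirs() fails. Constructing the File from the URI itself avoids the string round-trip; the entry-name handling below is simplified for illustration:

      import java.io.File;
      import java.net.URI;

      public class MkdirFix {
          public static void main(String[] args) throws Exception {
              URI baseFolder = new URI("file:/C:/Users/dario/jCommesse/");
              File base = new File(baseFolder);   // File(URI) strips the scheme
              File targetdir = new File(base, "jCommesseDB"); // the zip entry
              if (!targetdir.mkdirs() && !targetdir.isDirectory()) {
                  throw new IllegalStateException("Could not create " + targetdir);
              }
          }
      }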

    Read the article

  • How to deploy RESTful Web Service onto IIS

    - by Chris Lee
    Hi all, I'm new to .NET and IIS. I've created a simple RESTful web service using WCF in VS2008 on the .NET 3.5 framework. I've tested it with F5 debugging in VS (where it seems to be hosted automatically by the local development server). Now I want to deploy it on my IIS server so that I can use it remotely, but I cannot find any guide for this. I manually deployed my service folder just as I would an ASP.NET site, but it does not seem to work (it keeps returning a 401 error). Can anyone tell me how to deploy it to IIS? It contains a simple GET method, and I hope it can be accessed by anonymous clients (the host IP is 192.168..). I have a web.config file, a .dll and a .pdb under the /bin folder, and a Global.asax and .svc file for my service. The IIS server is on the same machine. Thanks a million.
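
    Since a 401 is an authentication failure rather than a deployment failure, one hedged thing to check is whether the site allows anonymous access. A minimal sketch of the relevant web.config fragment for IIS 7 integrated mode; note this section is often locked at server level, in which case it must be changed in IIS Manager instead:

      <configuration>
        <system.webServer>
          <security>
            <authentication>
              <!-- let unauthenticated clients call the GET method -->
              <anonymousAuthentication enabled="true" />
            </authentication>
          </security>
        </system.webServer>
      </configuration>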

    Read the article

  • SVN hook script conflict

    - by user297303
    I am trying to write a pre-commit hook script that will alter a specific SVN property of a folder/file. The script looks fairly similar to the one documented in the SVN book. I figured out how to set/change the property of a node, and when the binding function svn.fs.commit_txn executes, the property of the node actually gets set. But at the moment TortoiseSVN always gives me a conflict on the folder whose property I am altering. I wrote my script in Python but am new to Python and hook scripts. I hope someone can give me a clue as to why I am getting this conflict.
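
    For context, this conflict is the classic symptom of changing a transaction inside a hook: the committing client's working copy no longer matches what the repository recorded. The commonly recommended alternative is to validate and reject rather than alter. A hedged sketch using the SWIG Python bindings; the property and path names are illustrative:

      #!/usr/bin/env python
      import sys
      from svn import fs, repos

      repo_path, txn_name = sys.argv[1], sys.argv[2]
      txn = fs.open_txn(repos.fs(repos.open(repo_path)), txn_name)
      root = fs.txn_root(txn)

      # Validate instead of altering the transaction:
      if fs.node_prop(root, "/some/folder", "my:property") is None:
          sys.stderr.write("my:property must be set on /some/folder\n")
          sys.exit(1)  # non-zero exit rejects the commit
      sys.exit(0)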

    Read the article

  • Localization with separate Language folders within Views

    - by Adrian
    I'm trying to have specific folders for each language in Views. (I know this isn't the best way of doing it, but it has to be this way for now.) E.g.:
    /Views/EN/User/Edit.aspx
    /Views/US/User/Edit.aspx
    These would both use the same controller and model but have a different View for each language. In my Global.asax.cs I have:

    routes.MapRoute(
        "Default", // Route name
        "{language}/{controller}/{action}/{id}", // URL with parameters
        new { language = "en", controller = "Logon", action = "Index", id = UrlParameter.Optional }, // Parameter defaults
        new { language = @"en|us" } // validation
    );

    This works OK but always points to the same View. If I give the full path to the language folder it works:

    return View("~/Views/EN/User/Edit.aspx");

    but clearly this isn't a very nice way to do it. Is there any way to get MVC to look in the correct language folder? Thanks - and again, I know this isn't the best way of doing localization, but I can't use resource files.
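
    One hedged way to do this (a sketch, not a drop-in solution) is a custom view engine that injects the {language} route value into the view search locations; register it in Application_Start after clearing the defaults. Mutating the location formats per request like this is not thread-safe, so a production version would build the paths differently:

      public class LanguageViewEngine : WebFormViewEngine
      {
          public override ViewEngineResult FindView(ControllerContext context,
              string viewName, string masterName, bool useCache)
          {
              // Pull the language from the route and point MVC at that folder:
              string lang = ((context.RouteData.Values["language"] as string)
                             ?? "en").ToUpper();
              ViewLocationFormats = new[]
              {
                  "~/Views/" + lang + "/{1}/{0}.aspx",
                  "~/Views/{1}/{0}.aspx", // fall back to the default location
              };
              return base.FindView(context, viewName, masterName, useCache);
          }
      }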

    Read the article

  • Forward .html/.htm to .php with .config

    - by PhilipK
    I'm moving a site from my Linux-hosted server to a client's Windows-hosted server. The .htaccess file no longer works, and I'm told that Windows servers use web.config instead. How can I forward all users accessing .html and .htm files to the equivalent .php file?
    Server info:
    OS/hosting type: Windows / shared hosting
    .NET runtime version: ASP.NET 2.0/3.0/3.5
    PHP version: PHP 5.2
    IIS version: IIS 7.0
    Data center: US regional
    EDIT: hosting is provided by GoDaddy. A friend told me the following should work, but it has no effect on the site:

    <configuration>
      <system.webServer>
        <handlers>
          <add name="PHP-FastCGI" verb="*" path="*.html" modules="FastCgiModule" scriptProcessor="c:\php\php-cgi.exe" resourceType="Either" />
        </handlers>
      </system.webServer>
    </configuration>
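
    For what it's worth, the friend's snippet maps .html files to the PHP engine rather than forwarding the browser to the .php URL. A hedged sketch of an actual redirect using the IIS 7 URL Rewrite module, assuming the host has that module installed (the rule name is illustrative):

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="html-to-php" stopProcessing="true">
                <!-- matches both .htm and .html, keeping the base name -->
                <match url="^(.*)\.html?$" />
                <action type="Redirect" url="{R:1}.php" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>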

    Read the article

  • Top Questions and Answers for Pluging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative.

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies that have their own data center?
    A: It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and faster response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it up one by one? That doesn't seem to be the purpose of self-service.
    A: Most customers we have worked with define a standardized service catalog with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-run DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their service definition will evolve less, but they still might need to add capabilities such as self-service and metering/chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal service definition (if they have one) will be much less mature today. Their future-state DBaaS will therefore look a lot different from their current state, as will their service definition.

    Q: How does Database as a Service impact or help with "click to compute" or deploying a database in cloud infrastructure?
    A: DBaaS enables click to compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database? Do storage, OS, and database all come together to 'transparently' provide a service to applications? Will there be cross-database access by applications/users?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (under 15 minutes); 3) the services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it's difficult to find off-peak hours for basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the differing needs and patching downtime of the application databases they serve?
    A: You can gather stats in Oracle Multitenant the same way you did in regular databases. Regarding patching and upgrading, Oracle Multitenant makes both very efficient: you can pre-provision a new patched/upgraded multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
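
    To make the unplug/plug flow in that last answer concrete, here is a hedged sketch of the 12c statements involved (database and path names are illustrative):

      -- In the old, unpatched CDB:
      ALTER PLUGGABLE DATABASE sales CLOSE IMMEDIATE;
      ALTER PLUGGABLE DATABASE sales UNPLUG INTO '/u01/stage/sales.xml';
      DROP PLUGGABLE DATABASE sales KEEP DATAFILES;  -- release it, keep files

      -- In the new, patched CDB (running from a different ORACLE_HOME):
      CREATE PLUGGABLE DATABASE sales USING '/u01/stage/sales.xml' NOCOPY;
      ALTER PLUGGABLE DATABASE sales OPEN;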

    Read the article

  • Trying to configure samba share with office server

    - by tomphelps
    Hi, I'm trying to set up fstab to automatically connect to my office's shared server. I'm undoubtedly doing something silly here, as the username, password, and server name work fine in the first snippet below, just not in the second - any help would be appreciated! The following command works as expected:

    tom@tom-desktop: sudo /usr/bin/smbclient -L Server.local -Uguest
    Enter guest's password:
    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

        Sharename        Type    Comment
        ---------        ----    -------
        Lacie Disk       Disk    macosx
        Server           Disk    macosx
        IPC$             IPC     IPC Service (Server)
        ADMIN$           IPC     IPC Service (Server)

    Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

        Server           Comment
        ---------        -------
        ACER-9D60040D10  SERVER

        Workgroup        Master
        ---------        -------
        WORKGROUP        ACER-9D60040D10

    But when I add the following line to /etc/fstab:

    //Server.local/Server /media/maguires cifs username=guest,password=password 0 0

    I get this error: cifs_mount failed w/return code = -22
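
    A hedged guess at the cause: return code -22 is EINVAL, and one common trigger is that the kernel mount helper cannot resolve the .local name (that resolution normally happens via mDNS, which mount.cifs does not use). A sketch worth trying, with an illustrative IP address, assuming the cifs mount helper (smbfs/cifs-utils package) is installed:

      # /etc/fstab -- use the server's IP instead of the .local name:
      //192.168.1.10/Server  /media/maguires  cifs  username=guest,password=password,iocharset=utf8  0  0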

    Read the article

  • Integrating UppercuT and CC.NET

    - by deepasun
    I am trying to integrate UppercuT and CC.NET to get build and revision numbers. What do I have to put in the CodeBuild folder? While running CC.NET, I get an SVN error:

    Unable to execute file [D:\CodeBuild\abc\svn]. The file may not exist or may not be executable. ---> System.ComponentModel.Win32Exception: The system cannot find the file specified

    Without CC.NET integration, can we get both build and revision incremented? When will build and revision get incremented?

    Read the article

  • Visual Studio localhost server can't locate file

    - by mhenk
    I have a very simple web project: just one .htm file with some JavaScript that opens a file, test.xml. The XML file is in the same folder as the .htm file (and it is part of the project), but when I start the page (F5 or Ctrl-F5) it can't find the XML. It starts as http://localhost:50586/main.htm, and when I do a folder listing, the test.xml file is right there. Opening the page directly in Firefox works fine, and the script can read and extract data from the XML. How can I convince the development server that the XML is indeed there? Or better still: how can I turn the development server off entirely? For my purposes, simply opening the page in my browser is all I need, and that should of course happen from within VS.
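
    One hedged thing to rule out: when the page is served from http://localhost:50586/, the script must request the file by a relative, same-origin URL rather than a file-system path (which only works when the page is opened straight from disk). A minimal sketch; the file names match the question:

      var xhr = new XMLHttpRequest();
      xhr.open("GET", "test.xml", false); // resolved relative to main.htm
      xhr.send(null);
      var doc = xhr.responseXML;          // parsed XML document, or null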

    Read the article

  • How to Make a Laser Microscope at Home [Video]

    - by Asian Angel
    Earlier this year we shared a video that showed you how to make a microscope projector using a green laser light and a webcam lens. Today we are back with a video demonstrating an easy “at home” method using that same green laser light, a syringe, two glasses, and a blank wall. Note: Video contains some language that may be considered inappropriate. How to make a laser microscope [via Geeks are Sexy]

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or whether the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build at that, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although consistency of the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test systems, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer... well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or on the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • .htaccess redirect with other settings

    - by Zack
    I am using the CakePHP framework, which redirects everything to the app folder using .htaccess. I then set up a WordPress blog in the /news/ folder outside of CakePHP, and since I don't want anything in /news/ to be redirected, I modified the .htaccess. Here is the final version:

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^/news/(.*)$
        RewriteRule ^.*$ - [L]
    </IfModule>

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /
        RewriteRule ^$ app/webroot/ [L]
        RewriteRule (.*) app/webroot/$1 [L]
    </IfModule>

    And here is the question: I want to redirect mydomain/super to mydomain/news/super. How can I do this without affecting the other settings?
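
    A hedged sketch of one way: add a dedicated redirect rule ahead of both existing blocks, so it fires before the /news/ pass-through and the CakePHP catch-all ever run:

      <IfModule mod_rewrite.c>
          RewriteEngine On
          RewriteBase /
          # send /super (with or without trailing slash) to the blog post:
          RewriteRule ^super/?$ /news/super [R=301,L]
      </IfModule>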

    Read the article

  • netbeans ftp configuration

    - by sunwukung
    Hi, I've set up the FTP connection for my project, but when it uploads a file, it adds a directory named after the project to the upload path (which means files aren't going to the right folder) - i.e. the initial directory is set to '/httpdocs' and no upload directory is specified. I upload a file from my local folder: project name/library/script.php. I want it to go here: FTP/httpdocs/library/script.php, but it's going here: FTP/httpdocs/PROJECTNAME/library/script.php. Can anyone help me get this configured correctly?

    Read the article

  • Tri-Boot Win 7 64+Ubuntu 12.04+BackTrack 5

    - by Volchonoc
    I'd like to know the best procedure for doing a tri-boot. I don't want to re-size the Windows partition; I want to re-install it from scratch. I've heard that it's better to install Windows first, but will Windows allow me to create the right partition structure? And what is the best structure? Should I create a primary partition for Windows and an extended partition for everything else? If so, what should my logical partitions be: 1) Ubuntu + 2) swap (shared) + 3) BackTrack root + 4) BackTrack home? Or should I just make 4 primaries: 1) Windows + 2) Ubuntu + 3) BackTrack + 4) swap? And what formats should I choose for the Linux partitions? I would appreciate any info on this topic. Thank you.
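
    For illustration only, one workable MBR layout along the lines of the first option; sizes and device names are assumptions, not a recommendation over the alternatives:

      /dev/sda1  primary   NTFS  Windows 7 (install this first)
      /dev/sda2  extended  --    container for the logical partitions
      /dev/sda5  logical   ext4  Ubuntu /
      /dev/sda6  logical   ext4  BackTrack /
      /dev/sda7  logical   ext4  BackTrack /home
      /dev/sda8  logical   swap  shared by both Linux systems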

    Read the article

  • Extremely slow transfer speed ubuntu -> Windows

    - by Hailwood
    I have two laptops: one running Ubuntu 12.04 (ext4), the other running Windows 7 (NTFS). I am copying over 40 GB of data (one file) from the Ubuntu laptop to the Windows laptop (browsing the shared folder on Ubuntu from Windows and using copy/paste), but I am getting transfer speeds topping out at ~700kb/s. Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8mb/s on both laptops, so I know it is not the wifi cards or the router topping out.

    wlan0  Link encap:Ethernet  HWaddr 84:4b:f5:db:b4:85
           inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
           inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0
           TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:10087111370 (10.0 GB)  TX bytes:7843524888 (7.8 GB)
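
    As a hedged first diagnostic, measuring raw TCP throughput between the two laptops separates a network problem from a Samba one. This assumes iperf is installed on both machines; the IP address is illustrative:

      # On the Windows laptop (receiver):
      iperf -s

      # On the Ubuntu laptop (sender), against the Windows machine's IP:
      iperf -c 192.168.1.x -t 30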

    Read the article

  • How to access network folders from within windows 7 virtualbox

    - by musher
    At work we have file-share storage accessible via the network. I'm not super familiar with this kind of networking, so I'm not 100% sure how to word this question or give enough details right away, so please ask for any clarifications. That being said:
    Host: Xubuntu 14.04
    Guest: Windows 7 Enterprise
    Via Thunar I can access these network drives fine by going to Browse Network > Windows Network and opening the server I want to use. The name given in the address bar is smb://SERVER-NAME. So my question is: how do I go about accessing these folders from my Windows 7 machine inside VirtualBox? I've searched, but all I get is people who can't figure out shared folders between host and guest (which I've got working).
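
    A hedged sketch of the usual route: give the VM a network adapter in bridged mode (so the guest gets its own address on the office LAN), then map the share from inside Windows. The drive letter and share name below are illustrative:

      REM Inside the Windows 7 guest, once it is on the office network:
      net use Z: \\SERVER-NAME\ShareName /persistent:yes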

    Read the article

  • Executing php with crontab

    - by Stefan Konno
    Hi everyone. I'm trying to run a PHP script on a scheduled basis, so I thought crontab was a good idea. The server I'm using runs Linux, which I'm not that familiar with. The problem I'm having is that I don't know how to make the script executable, or how to run PHP from the command line. I need to reference the script in the crontab, for example:

    5 * * * * var/www/some/path/script.php

    I found some vague information about a php executable being found in /usr/bin/php, but I can't find any php file in there - maybe I don't have it installed? My PHP 5 and Apache configuration is in /etc/php5. So my question becomes: is there any way to execute a PHP script with crontab from some other folder, or do I just lack the php executable in /usr/bin/php?
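
    A hedged sketch of the usual fix on Debian/Ubuntu: the command-line PHP interpreter ships in its own package, and the crontab entry then invokes it with the script's absolute path:

      sudo apt-get install php5-cli
      which php                     # typically prints /usr/bin/php

      # crontab entry, with absolute paths for interpreter and script:
      5 * * * * /usr/bin/php /var/www/some/path/script.php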

    Read the article

  • Exporting query results to a file on the fly

    - by ercan
    Hi all, I need to export the results of a query to a CSV file in an FTP folder.
    1. Is it possible to achieve this within a stored procedure?
    2. If yes, here comes another constraint: can I achieve it without sysadmin privileges, i.e. without using xp_cmdshell plus the BCP utility?
    3. If no to 2, does the caller have to have sysadmin privileges, or would it suffice if the SP owner has sysadmin privileges?
    Here are some more details of the problem: the SP must export and transfer the file on the fly and raise an error if something goes wrong. The caller must get a response immediately, i.e. if there is no error, he can assume that the results were successfully transferred to the folder. Therefore, a DTS/SSIS job that runs every N minutes is not an option. I know the problem smells like I will have to do this at the application level, but I would be more than happy if all this could be done from T-SQL.
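
    For reference, a hedged sketch of the xp_cmdshell + BCP route mentioned in question 2; it does require xp_cmdshell to be enabled and suitable permissions, and the database, view, and path names are illustrative:

      DECLARE @cmd varchar(1000);
      SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.MyView" queryout '
               + '"\\ftpserver\share\results.csv" -c -t, -T';
      -- output comes back as result rows; inspect them for error text:
      EXEC master..xp_cmdshell @cmd;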

    Read the article
