Search Results

Search found 60836 results on 2434 pages for 'system io directory'.

  • Creating a Java package on Ubuntu?

    - by Gaurav_Java
    I am new to Java. I am trying to create a Java package and then compile a class that uses it from another directory, but I get an error like: bash: /home/gaurav/Desktop/package2/B.java: Permission denied

    Here is my first file, /home/gaurav/Desktop/package/A.java:

        package package1;
        public class A {
            interface A1 {
                void show();
                void display();
            }
        }
        class B extends A {
            public void show() { System.out.println("This is show method()"); }
            public void display() { System.out.println("this is Display method()"); }
        }

    Compiling it works fine (my working directory is /home/gaurav):

        javac /home/gaurav/Desktop/package/A.java

    Then I try to compile B.java, which is on my other drive at /media/gaurav/iPlay/package/B.java:

        package package2;
        class B {
            public static void main(String args[]) {
                System.out.println("Reached in Main method of B");
                package1.A Object = new A();
            }
        }

    I tried this command (from the previous working directory):

        javac -cp /home/gaurav/Desktop/;/media/gaurav/iPlay/package/B.java

    and this error comes back:

        javac: no source files
        Usage: javac <options> <source files>
        use -help for a list of possible options
        bash: /media/gaurav/iPlay/package/B.java: Permission denied

    What am I doing wrong? This is my assignment and I am not able to move further without it. I have already changed the permissions.
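
    A likely reading of those two errors: ';' is the Windows classpath separator, and bash treats an unquoted ';' as a command separator. So the shell actually ran "javac -cp /home/gaurav/Desktop/" with no source files (hence the first message) and then tried to execute B.java itself (hence "Permission denied"). A hedged sketch of the fix, using ':' plus quoting, and recompiling A.java with -d so its classes land in a package1/ directory as the package declaration requires (the classes directory is hypothetical):

        # put A's classes under .../classes/package1/
        mkdir -p /home/gaurav/Desktop/classes
        javac -d /home/gaurav/Desktop/classes /home/gaurav/Desktop/package/A.java

        # ':' separates classpath entries on Linux; the quotes stop bash
        # from splitting the command at a ';'
        javac -cp "/home/gaurav/Desktop/classes" /media/gaurav/iPlay/package/B.java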

  • How to avoid tilde ~ in Bash prompt?

    - by Jirka
    Hello! I have set my bash prompt in such a way that I can use it directly in an scp command. My current PS1 string:

        PS1="\h:\w\n$"

    And the prompt looks like this:

        lnx-hladky:/tmp/plugtmp $

    What I don't like at all is the fact that the $HOME directory is displayed as a tilde, which causes problems when switching between different users. Example:

        lnx-hladky:~/DOC $

    The documentation says:

        \w : the current working directory, with $HOME abbreviated with a tilde
        \W : the basename of the current working directory, with $HOME abbreviated with a tilde

    Is there any possibility to avoid $HOME being abbreviated with a tilde? I have found one way around it, but I feel it's overcomplicated:

        PROMPT_COMMAND='echo -ne "\e[4;35m$(date +%T)\e[24m$(whoami)@$(hostname):$(pwd)\e[m\n"'
        PS1=$

    Can anyone propose a better solution? I have a feeling it's not quite OK to run so many commands (date, whoami, hostname, pwd) just to get a prompt. Thanks a lot! Jirka
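
    One lighter-weight alternative (a sketch, relying on bash's default behaviour of expanding variables in PS1): $PWD is never abbreviated to a tilde, so it can replace \w directly. The single quotes matter, so the variable is expanded each time the prompt is drawn:

        PS1='\h:$PWD\n$ '

    This keeps the scp-friendly format without spawning date, whoami, hostname and pwd on every prompt.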

  • Applying languages / locale selectively: is it possible?

    - by Aron Rotteveel
    I am a Dutch user and prefer my local date and time format, system wide. I have no trouble speaking or understanding English and find it very useful to have the rest of my system configured in English, to make my life easier when I need to Google a term, for example. Is it possible to apply a local date/time/currency/etc. format to the system, while maintaining English menu and dialog captions? EDIT: output from locale:

        LANG=en_US.utf8
        LANGUAGE=en
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        LC_COLLATE="en_US.utf8"
        LC_MONETARY="en_US.utf8"
        LC_MESSAGES="en_US.utf8"
        LC_PAPER="en_US.utf8"
        LC_NAME="en_US.utf8"
        LC_ADDRESS="en_US.utf8"
        LC_TELEPHONE="en_US.utf8"
        LC_MEASUREMENT="en_US.utf8"
        LC_IDENTIFICATION="en_US.utf8"
        LC_ALL=
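
    This is exactly what the individual LC_* categories are for: message language and formats are independent. A hedged sketch for /etc/default/locale, assuming the Dutch locale has been generated first (sudo locale-gen nl_NL.UTF-8) and that LC_ALL stays unset, since it would override everything else:

        # keep program messages, menus and dialogs in English
        LANG="en_US.UTF-8"
        LC_MESSAGES="en_US.UTF-8"
        # Dutch date/time, number, currency, paper and measurement formats
        LC_TIME="nl_NL.UTF-8"
        LC_NUMERIC="nl_NL.UTF-8"
        LC_MONETARY="nl_NL.UTF-8"
        LC_PAPER="nl_NL.UTF-8"
        LC_MEASUREMENT="nl_NL.UTF-8"

    Log out and back in for the new settings to take effect.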

  • ADF Seeded Customizations in JDeveloper 11.1.2.1

    - by Dmitry Nefedkin
    For an ADF training session I needed a demo application that shows the ADF seeded customizations functionality. I'm using the latest JDeveloper 11.1.2.1, so I decided to download the "Customizing and Personalizing an ADF Application" completed tutorial application. I downloaded and unzipped CustomizeApp.zip and opened CustomizeApp.jws in JDeveloper 11.1.2.1 using the Customization Role. The result was the following error: MDS-00036 "Cannot instantiate the class oracle.model.mycompany.SiteCC". I thought: "OK, that's because the SiteCC class is not accessible to the JDeveloper classloader; I should jar it and put it into <JDEVELOPER_HOME>\jdev\lib\patches like I did in JDeveloper 11.1.1.5 and earlier." No such luck: in JDeveloper 11.1.2 we do not have this patches directory at all! It seems that is because of the new OSGi-based architecture of the JDeveloper plugins. I looked through the tutorial and found no step related to jar-ing the SiteCC class and moving it to a specific directory. So JDeveloper 11.1.2 is smart enough to find my customization class and add it to the classpath without any specific actions on my side. But why was I getting this "cannot instantiate the class" error? I checked the full path to my CustomizeApp.jws, c:\temp\ADF personalizations\CustomizeApp\CustomizeApp.jws, and noticed the space in the name of the directory. Was it the root cause of the issue? Yes! I renamed the "ADF personalizations" folder to "pers", opened c:\temp\pers\CustomizeApp\CustomizeApp.jws, and got the expected behaviour. So, be aware of spaces in paths when working with JDeveloper...

  • Is there a way to allow administrators to change or reset user passwords?

    - by Jon Seigel
    We have a custom MembershipProvider implementation using form-based authentication (FBA) under SharePoint 2007. I've searched high and low on Google, but only found: Active Directory and FBA implementations that allow users to change their own password; and Active Directory instructions (including video!) for administrators to change other users' passwords. Have we missed an option to enable the latter under FBA? Should this work by default, and is our MembershipProvider misbehaving? A procedure like the Active Directory one would be ideal, but the "Change Password" link does not appear in the Edit User screen. We verified that the logged-in user is a site collection administrator.

  • How do I add a network printer in Ubuntu 12.04?

    - by Ricky Robinson
    I know the name and the IP address of a network printer, but I can't seem to be able to search by IP address or name. Ubuntu developers love to move things around to make it difficult for users, so with Ubuntu 12.04 I can only go to Application -> System Tools -> System Settings -> Printers, click on Network, and a list of printers appears. Too bad the one I want to add isn't there. How do I do it? An answer I found elsewhere suggests System -> Administration -> Printing, which simply doesn't exist in 12.04.
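
    Since the IP address is already known, one route that bypasses the GUI entirely is CUPS itself. A sketch, assuming a printer at 192.168.1.50 that speaks the common JetDirect/port-9100 protocol (the device URI and driver depend on the actual model):

        # add and enable a queue pointing straight at the printer's IP
        sudo lpadmin -p officeprinter -E -v socket://192.168.1.50:9100

    Alternatively, the CUPS web interface at http://localhost:631/admin has an "Add Printer" form that accepts an ipp://, lpd:// or socket:// URI directly.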

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific customer event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include:

        Cut for Non-payment (CNP)
        Disconnect Warning (DIWA)
        Reconnect for Payment (REPY)
        Reread (RERD)
        Stop Service (STOP)
        Start Service (STRT)
        Start/Stop (STSP)

    Note the field values/codes defined for each event. CC&B also comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page. So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service. Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question as to what the impact is of having a user-defined customer event. System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events, however, are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (from the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Notice the wording: this algorithm implicitly checks for field activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Looking back at the customer's issue, we can see that the Post Cancel Algorithm was triggered, but it was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude: if you introduce new customer events that extend or simulate the base customer events included in the product, ensure that there is no other impact, direct or indirect, on other business functions the application offers.

  • Blank screen after installing nvidia restricted driver

    - by LaMinifalda
    I have a new machine with an MSI N560GTX Ti Twin Frozr II/OC graphics card and an MSI PH67A-C43 (B3) main board. If I install the current nvidia restricted driver and reboot the machine on Natty (64-bit), I only get a black screen after reboot and my system does not respond; I can't see the login screen. On the nvidia web page I saw that the current driver is 270.41.06. Is that the driver being used? By the way, I am an ubuntu/linux beginner and therefore not very familiar with ubuntu. What can I do to solve the black screen problem?

    EDIT: Setting the nomodeset parameter does not solve the problem. After Ubuntu starts, first I see the ubuntu logo, then strange pixels, and at the end the black screen. HELP!

    EDIT2: Thank you, but setting the "video=vesa:off gfxpayload=text" parameters does not solve the problem either. Same result as in the last edit. HELP. I would like to see Unity. This is my grub:

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="video=vesa:off gfxpayload=text nomodeset quiet splash"
        GRUB_CMDLINE_LINUX=" vga=794"

    EDIT3: I don't know if this is important; if this edit is unnecessary and unhelpful I will delete it. There are some log files (Xorg.0.log - Xorg.4.log); I don't know how these log files relate to each other. Please check the errors listed below.

    In Xorg.1.log I see the following error:

        [ 20.603] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)

    In Xorg.2.log I see the following error:

        [ 25.971] (II) Loading /usr/lib/xorg/modules/libfb.so
        [ 25.971] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        [ 25.971] (==) NVIDIA(0): RGB weight 888
        [ 25.971] (==) NVIDIA(0): Default visual is TrueColor
        [ 25.971] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        [ 26.077] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please
        [ 26.078] (EE) NVIDIA(0): check your system's kernel log for additional error
        [ 26.078] (EE) NVIDIA(0): messages and refer to Chapter 8: Common Problems in the
        [ 26.078] (EE) NVIDIA(0): README for additional information.
        [ 26.078] (EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device!
        [ 26.078] (II) UnloadModule: "nvidia"
        [ 26.078] (II) Unloading nvidia
        [ 26.078] (II) UnloadModule: "wfb"
        [ 26.078] (II) Unloading wfb
        [ 26.078] (II) UnloadModule: "fb"
        [ 26.078] (II) Unloading fb
        [ 26.078] (EE) Screen(s) found, but none have a usable configuration.
        [ 26.078] Fatal server error:
        [ 26.078] no screens found
        [ 26.078] Please consult the The X.Org Found [...]

    In Xorg.4.log I see the following errors:

        [ 15.437] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        [ 15.437] (==) NVIDIA(0): RGB weight 888
        [ 15.437] (==) NVIDIA(0): Default visual is TrueColor
        [ 15.437] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        [ 15.703] (II) NVIDIA(0): NVIDIA GPU GeForce GTX 560 Ti (GF114) at PCI:1:0:0 (GPU-0)
        [ 15.703] (--) NVIDIA(0): Memory: 1048576 kBytes
        [ 15.703] (--) NVIDIA(0): VideoBIOS: 70.24.11.00.00
        [ 15.703] (II) NVIDIA(0): Detected PCI Express Link width: 16X
        [ 15.703] (--) NVIDIA(0): Interlaced video modes are supported on this GPU
        [ 15.703] (--) NVIDIA(0): Connected display device(s) on GeForce GTX 560 Ti at
        [ 15.703] (--) NVIDIA(0): PCI:1:0:0
        [ 15.703] (--) NVIDIA(0): none
        [ 15.706] (EE) NVIDIA(0): No display devices found for this X screen.
        [ 15.943] (II) UnloadModule: "nvidia"
        [ 15.943] (II) Unloading nvidia
        [ 15.943] (II) UnloadModule: "wfb"
        [ 15.943] (II) Unloading wfb
        [ 15.943] (II) UnloadModule: "fb"
        [ 15.943] (II) Unloading fb
        [ 15.943] (EE) Screen(s) found, but none have a usable configuration.
        [ 15.943] Fatal server error:
        [ 15.943] no screens found

    EDIT4: There was a file /etc/X11/xorg.conf. As fossfreedom suggested, I executed

        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup

    However, there is still the black screen after reboot.

    EDIT5: Neutro's advice (reinstalling the headers) did not solve the problem either. :-( Any further help is appreciated!

    EDIT6: I just installed driver 173.xxx. After reboot the system shows me only "Checking battery state". Just for information. I will google the problem, but help is also appreciated! ;-)

    EDIT7: When using the free driver (Ubuntu says that the free driver is in use and activated), Xorg.0.log shows the following errors:

        [ 9.267] (II) LoadModule: "nouveau"
        [ 9.267] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
        [ 9.267] (II) Module nouveau: vendor="X.Org Foundation"
        [ 9.267] compiled for 1.10.0, module version = 0.0.16
        [ 9.267] Module class: X.Org Video Driver
        [ 9.267] ABI class: X.Org Video Driver, version 10.0
        [ 9.267] (II) LoadModule: "nv"
        [ 9.267] (WW) Warning, couldn't open module nv
        [ 9.267] (II) UnloadModule: "nv"
        [ 9.267] (II) Unloading nv
        [ 9.267] (EE) Failed to load module "nv" (module does not exist, 0)
        [ 9.267] (II) LoadModule: "vesa"
        [...]
        [ 9.399] drmOpenDevice: node name is /dev/dri/card14
        [ 9.402] drmOpenDevice: node name is /dev/dri/card15
        [ 9.406] (EE) [drm] failed to open device
        [ 9.406] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
        [ 9.406] (WW) Falling back to old probe method for fbdev
        [ 9.406] (II) Loading sub module "fbdevhw"
        [ 9.406] (II) LoadModule: "fbdevhw"

    EDIT8: In the meanwhile I tried to install Windows 7 64-bit on the machine and got a BSOD there too after installing the nvidia driver. :-) For this reason I sent the machine back to the hardware reseller. I will inform you as soon as I have a new system. Thank you all for the great help and support.

    EDIT9: In the meanwhile I have a completely new system with "only" an MSI N460GTX Hawk, but more RAM. The system works perfectly. :-) The original N560GTX had a hardware defect. Is it possible to close this question? THX!

  • How to make FileZilla open all the required files with one click

    - by Omar Tariq
    Is there any way of configuring FileZilla so that I can open all the files on a server that I need to edit with just one click? For example, if the files are like this:

        /home/abc/def/one.txt
        /home/abc/def/yet/another/directory/two.txt
        /home/abc/def/ghi/yet/another/directory/three.txt

    then it is very time-consuming to navigate through each directory and open the required files. These are only 3 files, but what if we have around 10 to 20? Yes, copying the paths of the directories is one option, but something built-in, where I could click one button like "open all the required files of this connection" and have them all open in the editor (as set in FileZilla preferences), would be great!
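
    As far as I know, FileZilla's bookmarks only remember directories, not file sets, so there is no built-in "open these N files" action. A hedged workaround is to keep the list in a batch file and fetch everything in one go outside FileZilla, assuming sftp access to the same server:

        cat > files.batch <<'EOF'
        get /home/abc/def/one.txt
        get /home/abc/def/yet/another/directory/two.txt
        get /home/abc/def/ghi/yet/another/directory/three.txt
        EOF
        sftp -b files.batch user@server    # one session, all three files

    The local copies can then be opened in any editor, though the edit-and-reupload convenience of FileZilla's built-in viewer is lost.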

  • Force apt to remove all emacs*

    - by wishi
    Hi! I have a bug-problem with the apt packages of emacs:

        >>Error occurred processing debian-ispell.el: File error (("Opening input file" "no such file or directory" "/usr/share/emacs23/site-lisp/dictionaries-common/debian-ispell.el"))
        >>Error occurred processing ispell.el: File error (("Opening input file" "no such file or directory" "/usr/share/emacs23/site-lisp/dictionaries-common/ispell.el"))
        >>Error occurred processing flyspell.el: File error (("Opening input file" "no such file or directory" "/usr/share/emacs23/site-lisp/dictionaries-common/flyspell.el"))
        emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 30.
        dpkg: error processing emacs23-lucid (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of emacs:
         emacs depends on emacs23 | emacs23-lucid | emacs23-nox; however:
          Package emacs23 is not installed.
          Package emacs23-lucid which provides emacs23 is not configured yet.
          Package emacs23-nox which provides emacs23 is not installed.
          Package emacs23-lucid is not configured yet.
          Package emacs23-nox is not installed.
        dpkg: error processing emacs (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         emacs23-lucid
         emacs
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    In fact I would be satisfied with just emacs23-nox and a couple of plugins from apt. But I can neither --purge, nor --purge reinstall, nor remove the packages; it always runs into this same bug. I did some google-searching and found some stuff on Launchpad suggesting:

        sudo apt-get install --reinstall --purge emacsen-common

    But this fails the same way... so I hope there is a way to tell apt to just remove everything related to emacs and start from scratch again? Thanks, Marius
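
    When the maintainer scripts themselves are what fail, one hedged escape hatch is to remove the packages with dpkg while skipping the dependency checks, then let apt repair the rest. A sketch, to be used with care (if a removal script also fails, it may need to be moved aside under /var/lib/dpkg/info/ first):

        dpkg -l 'emacs*'                    # see exactly what is installed
        sudo dpkg --purge --force-depends emacs emacs23-lucid
        sudo apt-get -f install             # let apt fix up whatever remains
        sudo apt-get install emacs23-nox    # then start over with the nox flavour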

  • NoSQL as file meta database

    - by fga
    I am trying to implement a virtual file system structure in front of an object storage (OpenStack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it turned out to be a tree structure, similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered search based on date, name, etc. as fast as possible. So what path should I take? Stick with relational plus some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and a database satisfying that model? P.S.: Currently Cassandra and MongoDB are the NoSQL choices proposed by my colleagues.

  • Do immutable objects and DDD go together?

    - by SnOrfus
    Consider a system that uses DDD (or, for that matter, any system that uses an ORM). The point of any such system, realistically and in nearly every use case, is to manipulate those domain objects; otherwise there is no real effect or purpose. Modifying an immutable object causes a new record to be generated when the object is persisted, which creates massive bloat in the datasource (unless you delete previous records after modifications). I can see the benefit of immutable objects in general, but in this context I can't see a useful case for them. Is this wrong?

  • Avoid unwanted path in Zip file

    - by jerwood
    I'm making a shell script to package some files. I'm zipping a directory like this:

        zip -r /Users/me/development/something/out.zip /Users/me/development/something/folder/

    The problem is that the resulting out.zip archive has the entire file path in it. That is, when unzipped, it contains the whole "/Users/me/development/something/" path. Is it possible to avoid these deep paths when putting a directory into an archive? When I run zip from inside the target directory, I don't have this problem:

        zip -r out.zip ./folder/

    In this case I don't get all the junk. However, the script in question will be called from wherever. FWIW, I'm using bash on Mac OS X 10.6.
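
    Two standard options, depending on whether the folder structure inside "folder" should be kept. The -j ("junk paths") flag stores bare file names only, while cd-ing inside a subshell keeps paths relative to the parent without disturbing the script's own working directory:

        # flatten completely: no directory components stored at all
        zip -rj /Users/me/development/something/out.zip /Users/me/development/something/folder/

        # keep "folder/..." structure, drop only the /Users/me/... prefix
        (cd /Users/me/development/something && zip -r out.zip folder)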

  • Trouble getting FTP login to work in IIS6

    - by Frank Rosario
    Hello all, I'm trying to set up an FTP site in IIS6 for one of my clients to pick up files from us. I've created the FTP site and set it to not isolate users (not necessary, as the FTP will be read-only with authentication). Here's the problem: the FTP is to be password protected, so I turned off anonymous access on the FTP site. I then created an ftpuser account on the machine and gave it read and browse-directory permissions on the FTP's root directory. However, when I test the ftpuser login, I get a 530 "ftpuser cannot login" error. Yet if I browse to the same directory over HTTP (anonymous access turned off as well) and enter the ftpuser login info, I can download files and browse directories successfully. Why is the ftpuser working over HTTP but not FTP? Shouldn't I be able to log in over FTP with the ftpuser login information I just created? Thanks in advance, - Frank

  • Help with Rewrite rules.

    - by Kyle
    I was wondering what a rewrite rule for this situation would look like. I want to have multiple users on my server, and each user should be able to host 'VirtualDocumentRoot'-like sites from his home directory. For example, a user just makes a directory like 'example.com' in their home directory, and it's hosted. The problem is I don't know whether VirtualDocumentRoot can do this, or whether it would take a rewrite rule that looks in all the users' folders for a domain. Can anybody help me?
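
    For what it's worth, VirtualDocumentRoot can only interpolate the requested host name into one fixed path pattern; it cannot search every user's home directory for a match. A hedged mod_rewrite sketch (this belongs in the server or virtual-host config, since RewriteMap is not allowed in .htaccess), assuming a hypothetical map file that records which user owns which domain:

        # /etc/apache2/hostowner.map (hypothetical):   example.com  alice
        RewriteEngine On
        RewriteMap hostowner txt:/etc/apache2/hostowner.map
        # look the requested host up in the map and capture the owning user
        RewriteCond ${hostowner:%{HTTP_HOST}} ^(.+)$
        # serve the request from that user's matching site directory
        RewriteRule ^/(.*)$ /home/%1/%{HTTP_HOST}/$1 [L]

    The map needs one line per hosted domain, so a small cron job or commit hook would have to regenerate it as users add 'example.com' directories.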

  • How to create public_html (Apache2) with LDAP authentication?

    - by borjamf
    I'm running Apache2 on Ubuntu 12.04 Server and I want to serve a home directory for each LDAP user. I'm using LDAP for authentication and it's working OK; I've also done some tests with the LDAP module for Apache2 and it's working OK. The problem with this LDAP authentication is that any successful login can access any ~user/public_html, even if the user is not the owner of that home. For example, userldap2 can access userldap1/public_html, but I want only userldap1 to have access to userldap1's files. Could anybody tell me how to control that with LDAP authentication? I hope you understand me. My config (auth_ldap.conf):

        <Directory /home/disco2/*/public_html>
            AuthName "Authentication"
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL ldap://prueba.borja/dc=prueba,dc=borja?uid?
            Require ldap-filter objectClass=posixAccount
        </Directory>
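
    Apache 2.2 (the version shipped with 12.04) has no generic way to say "the authenticated user must match the directory owner" inside one wildcard block. One hedged workaround keeps the shared LDAP authentication and adds a per-user Require block, which a small script could generate for every home directory:

        <Directory /home/disco2/*/public_html>
            AuthName "Authentication"
            AuthType Basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL ldap://prueba.borja/dc=prueba,dc=borja?uid?
        </Directory>

        # repeated (or generated) once per user:
        <Directory /home/disco2/userldap1/public_html>
            Require ldap-user userldap1
        </Directory>

    Apache 2.4 can express this generically with Require expr by comparing %{REMOTE_USER} against the requested path, if upgrading is an option.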

  • B2B communication using IBM MQ

    - by Dheeraj Kumar M
    Oracle B2B 11g provides the out-of-the-box ability to connect to IBM MQ to exchange messages. This support is provided via the JMS offering of Oracle B2B, in addition to B2B's existing stack of communication capabilities with trading partners. There are 2 ways of connecting to IBM MQ using B2B: 1. credential-based connectivity, and 2. .bindings-based connectivity. As a pre-requisite to connect to IBM MQ, the following libraries must be on the classpath:

        a. com.ibm.mqjms.jar
        b. dhbcore.jar
        c. com.ibm.mq.jar
        d. com.ibm.mq.jmqi.jar
        e. mqcontext.jar
        f. com.ibm.mq.pcf.jar
        g. com.ibm.mq.commonservices.jar
        h. com.ibm.mq.headers.jar
        i. fscontext.jar
        j. jms.jar

    Add the above jars to the domain library directory, usually located at $DOMAIN_DIR/lib. Jars in this directory are picked up and added dynamically to the end of the server classpath at server startup, e.g. /user_projects/domains/<domain>/lib/. Alternatively, the above jars can be added as part of setDomainEnv.sh.

    Credential-based connectivity. Outbound: configure the trading partner delivery channel to use the "Generic JMS" protocol. Inbound: configure the internal delivery channel to use the "Generic JMS" protocol with the following details:

        Destination Name: MQ Queue Name
        Connection Factory: MQ Queue Manager Name
        Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=<host>:<QM Listen port>/<MQ Channel Name>;
        User Name: MQ User Name
        Password: MQ password

    .bindings-based connectivity. As a pre-requisite, get/generate the .bindings file on the MQ server; this can be done by the MQ administrator. Then set the following values in the respective delivery channel for outbound/inbound:

        Destination Name: MQ Queue Name
        Connection Factory: MQ Queue Manager Name
        Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=file:///<location of .bindings file>;

  • Windows Server 2003 Synchronize Not Sticking

    - by lkessler
    We have a Windows Server 2003 machine that had RAID running on 2 disks. One disk failed, and the RAID controller failed as well. We replaced the disk and the controller and restored everything; no data was lost. The users of that server then found a number of directories that appeared empty. From their machines, we could right-click on such a directory, select "Synchronize", and the files in the directory would become visible to them. However, after opening Internet Explorer, browsing the web, and ftp'ing to a web site, the files in the directory would vanish again, and we would have to "Synchronize" once more to get them to reappear. What is causing this need to synchronize and then re-synchronize? What do we need to do to fix this so that the directories are permanently visible?

  • How do I force .htaccess authorization to occur over SSL?

    - by kenja
    I'm trying to force a particular directory to require both an allowed IP and a valid username/password through basic authorization. To ensure that the username/password are sent in encrypted form, I want the directory to also force SSL use. Here is what I have in my .htaccess file:

        # Force HTTPS connection
        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule (.*) https://www.mywebsite.com%{REQUEST_URI} [R,L]

        ## password begin ##
        AuthName "Restricted Access"
        AuthUserFile /var/www/admin/.htpasswd
        AuthType Basic
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 79.1.231.151 62.123.134.83
        Satisfy All

    Unfortunately, when I access the directory over plain HTTP, it asks for the password before redirecting to the secure version, which means the password is sent unencrypted. What am I doing wrong? Is there a way to do this?
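
    The usual explanation for this behaviour is that the authentication check fires during the same per-directory phase as the rewrite, so the 401 challenge reaches the browser before the redirect does. The classic workaround is to refuse plain-HTTP requests outright and turn the resulting 403 into a redirect, so credentials are only ever requested over TLS. A sketch, assuming mod_ssl is loaded and AllowOverride permits these directives:

        SSLRequireSSL
        ErrorDocument 403 https://www.mywebsite.com/

    One limitation: ErrorDocument redirects to a fixed URL rather than to the originally requested path, so point it at the protected directory's HTTPS index.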

  • Selling On Demand

    - by andrea.mulder
    In May 2010, eSilicon management began evaluating providers for a new CRM system, vetting a variety of CRM offerings. Using a rating system that scored vendors according to marketing, sales, services, features, usability, implementation time, and cost, the team chose Oracle CRM On Demand for the project. "Overall, Oracle CRM On Demand was the best system that was able to address all our pain points," says Janet Ang, senior applications developer and project manager of the CRM implementation at eSilicon. Read Selling On Demand, a feature article in the February 2011 issue of Profit Magazine, and find out how eSilicon achieved:

        Easy Implementation and Adoption
        Sales and Management Benefits
        High Productivity for Tech

  • I installed Ubuntu 12.04 on a Dell Inspiron 1501 alongside Windows Vista using the Windows installer, but it won't boot into Ubuntu

    - by Nicholas
    I installed Ubuntu 12.04 on a Dell Inspiron 1501 with an AMD64 alongside Windows Vista using the Windows installer, but it won't boot into Ubuntu. Ubuntu shows up in the boot menu, but when I select it I get a black screen displaying some error messages and telling me that there is no operating system installed. This is the error that I get:

        Try (hd0,0): FAT16: no WUBILDR
        Try (hd0,1): NTFS: error: "Prefix" is not set.
        symbol not found: 'grub_file_get_device_name'
        Aborted.
        Broadcom UNDI PXE-2.1 v2-1.0
        Copyright (c) 2000-2006 Broadcom Corporation
        Copyright (c) 1997-2000 Intel Corporation
        All rights reserved
        PXE-EC8: PXE structure was not found in UNDI driver code segment.
        PXE-M0F: Broadcom PXE ROM.
        Operating system not found

    How can I fix this? I have tried re-installing it but I get the same error.

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response on the following question relating to site migration and SEO impact. Here's some background on how my domain name and site are currently configured. My domain name provider has the following settings:

        host name @ is an A record and points to IP address x.x.x.x
        host name www is an A record and points to IP address x.x.x.x
        sub-domain host name new.example.com is an A record and points to IP address x.x.x.x

    My hosting provider has the following settings:

        host record @ is an A record and points to IP address x.x.x.x, folder home/public_html/old
        host record www is a CNAME record and points to example.com
        sub-domain host record new.example.com points to home/public_html/new

    I want to: point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com; retire the content hosted under folder home/public_html/old; and retire the sub-domain host record new.example.com. I believe the easiest method of doing this is removing the sub-domain host record new.example.com, and changing the following line in the .htaccess file in home/public_html from

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/old/

    to

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/new/

    But I don't understand how this will impact my SERP; ideally, I'd like it to remain the same. Research on this topic turned up a Google help page, which was no help, and a related StackExchange question which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?
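
    One SEO-relevant refinement worth considering: rather than deleting the new.example.com record immediately, keep it resolving and 301 it to the main host, so indexed URLs and backlinks pointing at the sub-domain pass their value along. A hedged sketch for the top of the .htaccess in home/public_html:

        RewriteEngine On
        # permanently redirect the retired sub-domain, path preserved
        RewriteCond %{HTTP_HOST} ^new\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    Once the search engines have re-indexed everything under example.com, the sub-domain record can be retired for good.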

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it calculates periods of time that parametrize the execution of some operations. For example:

        public void execute(...) {
            // executed by a scheduler every x minutes
            final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            final int alignedTime = now - now % getFrequency();
            final int startTime = alignedTime - 2 * getFrequency();
            final int endTimeSecs = alignedTime - getFrequency();
            uploadData(target, startTime, endTimeSecs);
        }

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().
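
    The common pattern is to stop calling System.currentTimeMillis() directly and instead inject a clock that tests can pin to a fixed instant. A sketch in Java, assuming Java 8's java.time is available (on older JDKs a one-method TimeSource interface plays the same role); the class name and numbers are hypothetical:

        import java.time.Clock;
        import java.time.Instant;
        import java.time.ZoneOffset;

        public class UploadWindow {
            private final Clock clock;   // production passes Clock.systemUTC()

            public UploadWindow(Clock clock) { this.clock = clock; }

            public int startTimeSecs(int frequency) {
                final int now = (int) (clock.millis() / 1000);
                final int aligned = now - now % frequency;
                return aligned - 2 * frequency;
            }
        }

        // in a test, time stands still, so the expectation is exact:
        // Clock fixed = Clock.fixed(Instant.ofEpochSecond(1_000_000), ZoneOffset.UTC);
        // assert new UploadWindow(fixed).startTimeSecs(300) == 999_300;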

  • Mimic NTFS "Modify" Permissions on an ACL-enabled ext3 filesystem in Linux?

    - by bobinabottle
    I am migrating our file share from Windows Server to Samba on Linux, and the only hurdle at the moment is the ACLs. Currently we have a number of directories that use the "Modify" permission on NTFS, so users can write new files to a directory, but once a file is written it cannot be modified. On Linux, my idea was to give the directory an ACL with read/write access, but attach a default ACL granting read-only access. Is this possible? I'm not quite sure how to set a default ACL that differs from the parent directory's. Thanks!
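
    The two halves are set separately: an access ACL on the directory grants create/write, while a default ACL (what new files inherit) grants read only. A sketch, assuming a group named "team" and an ext3 mount with the acl option enabled:

        # members of "team" may enter the directory and create files
        setfacl -m g:team:rwx /srv/share/dropbox
        # files created inside default to read-only for the group
        setfacl -d -m g:team:rX /srv/share/dropbox
        getfacl /srv/share/dropbox    # verify both the access and default entries

    One difference from NTFS "Modify" to test for: the Unix owner of a newly created file may still be able to rewrite their own file; adding u::r-- to the default ACL strips even the creator's write bit if that is required.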

  • Need Help Changing Owner of External Hard Drive

    - by Thomas Ballew
    My understanding of code is about zero. I can open a terminal window and type commands that are given to me, but that's about it. If someone can help me with this question and explain at a level I'm likely to understand, thanks; if not, thanks anyway. I have an external hard drive with two partitions. I bought this drive when my operating system was Apple, 10.5 or so, and it was formatted as HFS+ with that system. Now, connecting the HD to my Linux system, I can read files, but I have about 1.5 TB of space that I can't use, because I am not the owner of the files and so can't write to the HD. Short of reformatting the HD, is there a way for me to set the permissions on the HD so I can write to it? Again, thank you.
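
    Two things are usually involved here, and both are quick terminal jobs. First, Linux mounts journaled HFS+ partitions read-only no matter who owns the files, so an ownership change only helps once the mount is writable; second, ownership itself is a one-line chown. A sketch, assuming the partition turns out to be /dev/sdb2 (check the first command's output, since the device name and mount point will differ):

        mount | grep -i hfs                      # find the device, mount point, and whether it says "ro"
        sudo umount /media/yourdrive
        sudo mount -t hfsplus -o force,rw /dev/sdb2 /media/yourdrive
        sudo chown -R $USER: /media/yourdrive    # take ownership of the files

    The cleaner long-term fix is to disable journaling on the drive from a Mac (Disk Utility can do this), after which Linux will mount it read-write on its own.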
