Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Is Drupal going to improve these small features?

    - by Patrick
    Is Drupal going to improve the following features? 1) Multi-line descriptions for CCK fields such as images (at the moment I can only write one line, but a textarea would be better). 2) Thumbnail upload for CCK video fields (so I can upload a thumbnail for each video if I cannot install ffmpeg on the server). 3) Merging CCK images and videos into a single group, so my customer can order them in the same list by dragging and dropping, and my front end shows them ordered in the same list. This would be very useful for me. Do you know if I can get some of them with some modules, maybe? Thanks

    Read the article

  • Forcing exact hostname match in IIS

    - by iis_newbie
    I am looking for a way to force an exact hostname match within IIS when using HTTPS. For instance, I want "https://works.mysite.com/resource" to be OK, but "https://noworks.mysite.com/resource" to return 404 (assuming they both resolve to the same IP). If I understand correctly, the default behavior of IIS when going to "https://noworks.mysite.com/resource" is to show a cert warning; if the user presses continue, they can still access the URL. I was able to do this by generating a *.mysite.com SSL cert and then specifying the hostname within the bindings in IIS, but without the * at the beginning, the hostname field is disabled and blank. Am I missing something simple here?
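
    For reference, in IIS 7.x a host header can sometimes be attached to an HTTPS binding from the command line even when the GUI field is greyed out. A hedged sketch, with the site name "MySite" assumed:

        appcmd set site /site.name:"MySite" /+bindings.[protocol='https',bindingInformation='*:443:works.mysite.com']

    Requests for other hostnames on that IP should then fail to match this site's binding.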

    Read the article

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some of the folders based on the year and month, while leaving other folders pointing to their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:

        http://mydomain.com/wordpress/myblog/2010/02
        http://mydomain.com/wordpress/myblog/2011/01
        http://mydomain.com/wordpress/myblog/2012/01

    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:

        http://mydomain.com/wordpress/myotherblog/2010/02
        http://mydomain.com/wordpress/myotherblog/2011/01

    and so on, while 2012 and beyond go to the actual site (http://mydomain.com/wordpress/myblog/2012/01). I tried mod_rewrite with the following rules, one at a time, to test redirection for just one year (intending to expand later for other years), and none of them worked. Notes:

    - RewriteEngine is already on, since there are some default WordPress rewrites.
    - RewriteBase is set to http://mydomain.com/wordpress/.
    - I put my rule before all the other default WordPress rules.

    Solution #1 (didn't work): RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1
    Solution #2 (didn't work): RewriteRule /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1 [R=301]
    Solution #3 (didn't work): RedirectPermanent /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1

    I have also tried the above rules with and without a fully qualified URL for the new location. The rewrite log, with log level set to 9, did not provide any useful information: it shows the pattern in the rule being checked against the URL, but what finally happens is either a passthrough to http://mydomain.com/myblog/ for all URLs or a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions?
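
    Two details worth checking, hedged since the full .htaccess isn't shown: RewriteBase takes a URL path, not a full URL, and in a per-directory (.htaccess) context the matched path has no leading slash. A minimal sketch, assuming the rules live in /wordpress/.htaccess:

        RewriteEngine On
        RewriteBase /wordpress/
        # must come before the default WordPress rules
        RewriteRule ^myblog/(2010|2011)/(.*)$ /wordpress/myotherblog/$1/$2 [R=301,L]

    Note also that RedirectMatch belongs to mod_alias and matches the full URL path (so its pattern would need to start with /wordpress/myblog/), while RedirectPermanent takes a plain prefix, not a regular expression.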

    Read the article

  • How to change the default PDF viewer for all users from the command line

    - by dodecaplex
    I'm using Debian Squeeze with the GNOME desktop for all my users. I have a group of machines to set up so that all users use xpdf as the default PDF viewer (rather than evince). I want this setup to be done from the command line (even better, via puppet). I know about the xdg-mime command, but the man page says that the "default" subcommand should not be used as root. I could manually tweak the /etc/gnome/defaults.list file, but I'm looking for a single command I could run to apply the setting without editor interaction. Any idea?
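
    For what it's worth, a hedged sketch of both routes (the defaults.list edit assumes an existing application/pdf line in that file):

        # per user: run as that user, not as root
        xdg-mime default xpdf.desktop application/pdf

        # system-wide, scriptable from puppet or a shell loop:
        sed -i 's|^application/pdf=.*|application/pdf=xpdf.desktop|' /etc/gnome/defaults.list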

    Read the article

  • Multiple .bkf files created in Backup Exec 12.5 or 2010: related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. To describe what I mean by multiple files: the .bkf files are being created with random file sizes under 2GB, even though I've set the option to split the file after it reaches 10GB. Some jobs will create 20 .bkf files in one job, with chunks ranging from 50MB to 800MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek advice and suggestions. I've set up another backup server with the exact same settings, and it seems to create a new .bkf file only when the 10GB limit has been reached. Although I am backing up different machines, I know my settings are an exact match to the problematic server's, or at least I think that's where the problem is.

    Read the article

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section will delve into thread dump analysis of an OSB server, detecting threading issues related to Service Callout, and using heap dumps and OQL to identify the related proxies and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS. Please refer to the blog posting for more details.

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application, he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu, looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only approximations of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day Mom returns home from a doctor's visit the Daughter needed and decides it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world, though? And the icing on the cake is that this single data point changes over time. How have you solved or managed this conflict?

    Read the article

  • JiglibX addition to existing project questions

    - by SomeXnaChump
    I've got a very simple existing project that basically contains a lot of cubes. Now I want to add a physics system to it, and JiglibX seemed like the simplest one with some tutorials out there. My main problem is that the physics don't seem to be working as I imagined: I expected my tower of cubes to come crashing down, but they don't seem to do anything. I think my problem is that my cubes do not inherit DrawableGameComponent; they are managed by a world object that updates and renders them, so they are at no point put into the game's component list. I am not sure if this means that JiglibX will not be able to interact with them, as in all the tutorials there are no explicit calls to add the Body objects to the physics system, so I can only presume that they either use a static/singleton under the hood which automatically hooks everything in, or they use the game's component list somehow. I also noticed that a lot of the tutorials use the following when stepping the physics system: float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond; PhysicsSystem.CurrentPhysicsSystem.Integrate(timeStep); Would it not be better to keep a local instance of the created PhysicsSystem object and just call myPhysicsSystem.Integrate(timeStep)?

    Read the article

  • In an online questionnaire, what is the best way to design a database to keep track of all of a user's attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin creates a questionnaire, and a questionnaire can have many questions. Each question is a multiple choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions, and users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all their attempts. For example:

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       1 June 2013   1
        1        1                 2            b       1 June 2013   1
        1        1                 1            c       2 June 2013   2
        1        1                 2            d       2 June 2013   2

    It can also happen that after a user has attempted the same questionnaire twice, the admin deletes a question from it; the user's attempt history should still reference that question, so the user can see it in their history in spite of the admin deleting it. If the user now attempts the changed questionnaire, they should see only one question:

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       3 June 2013   3

    Also, if the admin later modifies some part of a question, the user's attempt history should show the question as it was before modification, while any new attempt shows the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not physically delete; just mark the question inactive so that history can still track the user's attempts. For modifications, create versions of questions; each new attempt refers to the latest version of each question, while history keeps a reference to the version of the question at attempt time.
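
    To make the versioning idea concrete, a sketch with hypothetical names: keep a question_version table and have each answer row reference a specific version rather than the bare question:

        question_id  version  question_text        created_on
        1            1        (original wording)   1 June 2013
        1            2        (modified wording)   3 June 2013

    Attempt-history rows then carry (question_id, version), so old attempts keep showing the wording the user actually saw, and an inactive flag on the question hides it from new attempts without deleting anything.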

    Read the article

  • Why can't WARs share session info?

    - by rvcoutinho
    I have seen several developers looking for a solution to this problem: accessing session information from a different WAR (even when inside the same EAR). Here are some samples: "Any way to share session state between different applications in tomcat?", "Access session of another web application", "different WAR files, shared resources", "Tomcat: How to share data between two applications?", "What does the crossContext attribute do in Tomcat? Does it enable session sharing?" and so on. From all I have searched, there are some container-specific solutions, but they are somehow 'contrary to the specification'. I have also looked through the Java EE specification without any luck in finding an answer. Some developers talk about coupling between web applications, but I tend to disagree: what reason would one have to keep WARs inside the same EAR if not coupling? EJBs, for instance, can be accessed locally (even from another EJB JAR within the same EAR). More specifically, one of my WARs handles authentication and authorization, and I would like to share this information with the other WARs in the same EAR. I have managed to work around similar problems before by packaging WARs as JARs and putting them into a single WAR project (WEB-INF/lib), yet I do not like this solution (it requires a huge effort on servlet naming and so on). And no solution has answered the first (and most important) question: why can't WARs share session information?

    Read the article

  • PHP processes run one at a time, always taking 100% of one core

    - by Derek Kurth
    We have seven websites written in PHP running on a Windows 2008 server with IIS 7.5. They are all very slow right now. When I look in Task Manager, I see around 10 php-cgi.exe processes, and they are all taking 0% of the CPU, except one, which is taking 25%. It's a quad-core server, so that one process is taking 100% of one core. If I watch for a few seconds, the process taking 25% drops to 0%, and a different php-cgi.exe process jumps to 25%. So all the php-cgi.exe processes seem to be lined up, waiting on a single core, and each process uses 100% of that core when its turn comes. Each of the seven sites is in its own application pool in IIS, and we're using FastCGI. The PHP version is 5.3. Any ideas? Thanks!

    Read the article

  • How would you model objects representing different phases of an entity's life cycle?

    - by Ophir Yoktan
    I believe the scenario is common mostly in business workflows, for example loan management: the process starts with a loan application, then there's the loan offer, the 'live' loan, and maybe also finished loans. All these objects are related and share many fields, but each also has many fields unique to it. The variety of objects may be large, and the transformation between them may not be linear (for example, a single loan application may end up as several loans of different types). How would you model this? Some options:

    - An entity for each type, each containing the relevant fields (possibly grouping related fields as sub-entities). This leads to duplication of data.
    - An entity for each object, but instead of duplicating data, each object holds a reference to its predecessor (the loan doesn't contain the borrower details, but a reference to the loan application). This couples the object structure to the way it was created: if we change the loan application, it shouldn't affect the structure of the loan entity.
    - One large entity with fields for the whole life cycle. This can create 'mega objects' with many fields, and it doesn't work well when there's a one-to-many or many-to-many relation between the phases.

    Read the article

  • Stop Windows 7 from installing critical updates

    - by Rico
    I have disabled Windows Update on Windows 7, but it still downloads and installs critical updates. As with Windows XP, these updates occasionally create problems; but in XP, once I disabled automatic updates, the OS did not override my settings and continue to download and install critical updates. After the most recent critical update to Windows 7, every time I boot up, the USB devices connected to a single port through a 4-way hub don't install. I have to unplug the hub from the computer and plug it back in; then the devices install. I can use System Restore to solve the immediate problem, but I'd like to disable automatic updates entirely. Why does Windows 7 not respect my settings like Windows XP did?
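
    If the Control Panel setting is being overridden, the policy key usually wins. A hedged sketch of forcing it from an elevated command prompt (this is the standard Windows Update policy location, but verify on your build before relying on it):

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f

    A reboot (or gpupdate /force) is needed for the policy to take effect.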

    Read the article

  • How to batch convert video files on OSX for AppleTV2 / iPhone4?

    - by Luke404
    I'd like a solution to batch convert video files to a format suitable for the AppleTV2, iPad2, and iPhone4, while preserving as much quality as possible. I want a single output file that will play on all of these devices and is also good for consumption by other Mac software (e.g. Aperture, iMovie, iTunes). Batch processing is a requirement, since I'm going to convert many, many files from different sources (mainly videos captured by compact digital cameras, cell phones, and so on). I'm looking into ffmpeg and MEncoder (both installed via MacPorts), but I can't seem to find a suitable preset for libx264, even though everyone out there is talking about them. A different approach involving different software would be OK too, as long as I can script it somehow and run it on a whole directory full of files to be converted.
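
    One possible approach, hedged: HandBrake's command-line build ships device presets targeting exactly this combination, and it can be wrapped in a shell loop for batch work. A sketch, assuming HandBrakeCLI is installed and the input files live in one directory:

        for f in /path/to/videos/*; do
          HandBrakeCLI -i "$f" -o "${f%.*}.m4v" --preset="AppleTV 2"
        done

    The "AppleTV 2" preset produces H.264 .m4v files that should also play on the iPad 2 and iPhone 4 and import cleanly into iTunes.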

    Read the article

  • Logs flooded by Squid when intercept and authentication are enabled together

    - by Horace
    I have done some hefty Googling and I can't seem to find a single solution to this issue that I am currently experiencing. Here is a sample configuration from squid that I have:

        #
        # DIGEST Auth
        #
        auth_param digest program /usr/sbin/digest_file_auth /etc/squid/digpass
        auth_param digest children 8
        auth_param digest realm LHPROJECTS.LAN Network Proxy
        auth_param digest nonce_garbage_interval 10 minutes
        auth_param digest nonce_max_duration 45 minutes
        auth_param digest nonce_max_count 100
        auth_param digest nonce_strictness on
        # Squid normally listens to port 3128
        http_port 192.168.10.2:3128 transparent
        https_port 192.168.10.2:3128 intercept
        http_port 192.168.10.2:3130

    As noted above, I have three ports defined: two of them are transparent/intercept and one is a regular HTTP port (which I use for authentication). This works rather well in this configuration, but my logs are getting flooded with the entry "authentication not applicable on intercepted requests" whenever a transparent connection is made. So far, I can't find any documentation that describes how to suppress these messages.

    Read the article

  • What to do when Ctrl-C can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy in certain network operations). In that case, you just see "^C" by your cursor and can't do much else. What's the easiest way to force that process to die now without losing my terminal? Summary of answers below:

    - Usually, you can press Ctrl-Z to suspend the process, then do "kill -9 <pid>", finding the process's PID with 'ps' and other tools.
    - In Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier.
    - If Ctrl-Z doesn't work either, you'll have to open another terminal and kill the process from there.

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket; buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure, of course, to actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {

            public static void main(String[] args) {
                try {
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code via drag/drop into a Java source file. I tried the above: I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. That is, without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • How can I make Excel documents open in different windows?

    - by Eugene
    Office 2007, Windows Server 2008 x64. How can I make Excel open each document in a new Excel instance when I double-click it, so that I can easily view documents side by side as separate windows, without using the View → Arrange All functionality? Right now I have to go to the task bar, click on one document to see it, and then click on the other document in the task bar to switch to it. As an alternative, I close one document, open a new Excel window, and then drag the document into it. Thank you.

    Read the article

  • nVidia 9800 GTX+: X11 fails to initialize, no Unity or LightDM

    - by rlemon
    I have just moved my work PC to 12.04 (a fresh install, not an upgrade), installing updates during the install, and after everything has loaded (with no errors) and I restart, I get brought to a console tty1 login. Console 7 looks like this: (screenshot omitted). IIRC I did not have to finagle with my drivers on 11.10 to get this card working. If this is in fact a driver bug I will remove this post and submit the bug, but I'm not 100% confident that it is. I attempted to run unity --reset and got this: (output omitted). Lastly I tried $ sudo apt-get install nvidia-current, which tells me nvidia-current is already the newest version, so I ran $ sudo dpkg-reconfigure nvidia-current, which says /usr/sbin/dpkg-reconfigure: nvidia-current is broken or not fully installed. Anything I can try from here would be awesome. Currently the only way to get the system up and running was to shut down, plug one of my monitors into the onboard video, enable the onboard video card in the BIOS, then boot back up (and on my single monitor everything is fine). Update: I am able to boot fresh with the external card plugged in as long as I don't take the updates during the install. Past that point, if I install just the nvidia drivers (nvidia-current or nvidia-current-updates) from the main (or Canadian) server, the problems return. My proposal, which I don't know where to start on: can I try installing the previous version of this driver? In the past, on another machine, I had issues with a funky NIC driver; I downgraded to the previous driver and, bam, everything was merry and well.

    Read the article

  • Difference between SSLCertificateFile and SSLCertificateChainFile?

    - by chrisjlee
    Normally, with a virtual host, SSL is set up with the following directives:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work? What is the difference between SSLCertificateFile and SSLCertificateChainFile? The client has purchased a CA certificate from GoDaddy. It looks like GoDaddy only provides an SSLCertificateFile (.crt file) and an SSLCertificateKeyFile (.key file), and no SSLCertificateChainFile. Will my SSL still work without an SSLCertificateChainFile path specified? Also, is there a canonical path where these files should be placed?

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image (of size at most 500 KB) in bytes over TCP and writes it to a file. It then applies a Sobel filter to the image and sends the result over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found it is very slow, around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions: 1) Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2) Even if the code is not that efficient, the server is handling a very low load (just one client); does the "inefficient" code justify such performance? 3) My laptop is only dual-core. Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • Find files whose names are smaller or greater than a given parameter

    - by Tzury Bar Yochay
    Say that in a given directory I got:

        tzury@x200:~/Desktop/sandbox$ ls -l
        total 20
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P002
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P003
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P004
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P002

    I seek a bash way to grab the list of files whose names are either greater or smaller than a given parameter. For instance, "$ my_finder lt N00.P003" shall return N00.P000, N00.P001 and N00.P002, while "$ my_finder gt N00.P003" shall return N00.P004, N01.P000, N01.P001 and N01.P002. I was thinking of iterating over for name in $(ls) and comparing while $name != $2, but I believe there are more elegant ways of doing so.
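
    A minimal sketch of such a my_finder, relying on bash's lexicographic string comparison inside [[ ]] (the glob also avoids parsing ls output):

        #!/bin/bash
        # usage: my_finder lt|gt PIVOT
        op=$1
        pivot=$2
        for name in *; do
            case $op in
                lt) [[ $name < $pivot ]] && echo "$name" ;;
                gt) [[ $name > $pivot ]] && echo "$name" ;;
            esac
        done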

    Read the article

  • Hosting and domain registrations for multiple clients

    - by letseatfood
    I am finally getting regular work designing, developing, and deploying websites for small businesses and individuals. So far the websites use single-user content management systems, so as far as I know they put minimal load on the shared servers. I have always required that each of my clients purchase annual shared hosting at Dreamhost. For domain registration, I ask that they register with Dreamhost, but some already have a domain registered elsewhere, and that is fine with me. I do this so the billing issues are the client's responsibility, not mine. My question is: since I can register unlimited domains and connect them to my one shared hosting account at Dreamhost, should I stop requiring clients to individually pay for shared hosting and a domain? Should I instead be paying for one hosting account and hosting all of my clients' websites on that account? As I said before, I currently have each client buy their own hosting because I feel that, for example, if there is high traffic to their site, there is less chance of the site going down than if it were hosted with many others on one account. I am famous for being long-winded; please let me know if I can clarify at all. Thanks!

    Read the article

  • Catch headset pause/play keypresses in Windows

    - by akshay2000
    I have a new ultrabook which has a single audio jack for input and output, instead of the separate 3.5 mm jacks we used to have on older machines. The jack is probably similar to the American audio jack specification, or like the one found on a MacBook Pro. I have tried to use it with the Apple, HTC, and Nokia earphones that ship with most smartphones. The microphone on the headset works the way it should. The thing is that these headsets also come with remote controls for volume and playback, and I am fairly sure those key presses are sent to Windows. I was hoping to catch those events and bind them to actual media keys so that I can control music playback. I guess this happens on Macs; I want to do the same thing on Windows. I'm just not sure where I can catch the events. Driver level? Application level?

    Read the article

  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from it. I then tried configuring Apache with one of the files and it worked, but browsers show a warning: "The certificate is not trusted because no issuer chain was provided." I thought I had to use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that, but it didn't work, so I am assuming it might be something completely different. Has anyone faced this issue before? The following page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
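
    For reference, a hedged sketch of extracting the certificates without going through Windows: OpenSSL can unpack a PKCS#7 bundle directly (file names assumed):

        # dump every certificate in the bundle to PEM
        openssl pkcs7 -print_certs -in chain.p7b -out chain.pem

    If the bundle is DER-encoded rather than PEM, add -inform DER. The first certificate in the output is typically the server cert (SSLCertificateFile); the rest form the intermediate chain (SSLCertificateChainFile).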

    Read the article
