Search Results

Search found 282 results on 12 pages for 'jean'.

Page 6 of 12

  • Conventional Parallel Inserts do Exist in Oracle 11

    - by jean-pierre.dijcks
    Had an interesting chat with Greg about said topic, and searching turned up the following link, which discusses it in some detail (no reason for me to repeat it). insert /*+ noappend parallel(t1) */ into t1 select /*+ parallel(t2) */ * from t2 generates a LOAD TABLE CONVENTIONAL plan step and does give you a parallel insert without doing a direct-path insert. As this is missing from the official documentation, it is probably something few people know exists, so kudos to Randolf Geist.
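
    A minimal sketch of the statement in question (t1 and t2 come from the excerpt above; the ALTER SESSION is an assumption, since parallel DML normally has to be enabled per session):

        -- Enable parallel DML for this session.
        ALTER SESSION ENABLE PARALLEL DML;

        -- NOAPPEND suppresses the direct-path (APPEND) load, so the insert runs
        -- as a conventional load; the PARALLEL hints request parallel execution
        -- on both the insert side and the query side.
        INSERT /*+ noappend parallel(t1) */ INTO t1
        SELECT /*+ parallel(t2) */ * FROM t2;

        COMMIT;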

    Read the article

  • Upgrade failed, now impossible to restart

    - by Jean Claude Dispaux
    I have an Aspire One with Ubuntu that I use only when traveling, i.e. seldom. Yesterday I tried to start it, and it informed me that I had to install a new release of Ubuntu. The download went fine, and then I left it for the night. In the morning I found error messages. I tried to restart, but nothing works any longer. The only backup I have is two USB keys made by the person who installed Ubuntu, labeled Recovery Ubuntu 8 and Ubuntu 9.10 respectively. Right now I have plugged in the "8" key, pressed F12, and told the machine to boot from the USB key. It has been running for an hour; the screen still says Ubuntu and the USB key flashes red. By the way, I have no precious data on this machine; I do not care about losing data. Please advise on what to do now. Thanks.

    Read the article

  • Dependencies not met while installing digikam 2.9

    - by Jean-Mi
    I'm trying to install digikam 2.9 from http://ppa.launchpad.net/philip5/kubuntu-backports/ubuntu. Here is the message I got (translated from French):

    The following packages have unmet dependencies:
    digikam:
      Depends: libkdecore5 (>= 4:4.7.0) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkdeui5 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkfile4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkhtml5 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkio5 (>= 4:4.7.0) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkipi9 (>= 4:4.8.80) but 4:4.8.4d-precise~ppa1 is to be installed
      Depends: libknotifyconfig4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libkparts4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libnepomuk4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: libphonon4 (>= 4:4.2.0) but 4:4.7.0really4.6.0-0ubuntu1 is to be installed
      Depends: libqt4-dbus (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libqt4-qt3support (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libqt4-sql (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libqt4-xml (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libqtcore4 (>= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libqtgui4 (>= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is to be installed
      Depends: libsolid4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed
      Depends: digikam-data (= 4:2.9.0-precise~ppa1kde49) but 4:2.9.0-precise~pp

    Can someone help? JM
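
    Of the lines above, the only constraint that clearly fails against the available versions is libkipi9 (digikam wants >= 4:4.8.80, but the PPA offers 4:4.8.4d-precise~ppa1); the digikam-data line is truncated, so it may conflict too. A quick way to confirm what apt can see (a minimal sketch; package names taken from the error above):

        # Show the installed and candidate versions of the blocking package
        apt-cache policy libkipi9

        # List digikam's declared dependencies for comparison
        apt-cache show digikam | grep -i '^Depends'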

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application whose functions are scoring, saving, and loading tennis games. The architecture has two kinds of elements: components and layers.

    Components: standalone elements that can be consumed by other components or by layers. They might also consume functionality from the model/bottom layer.
    Layers: software components whose functionality rests on previous layers (except for the model layer).

    Layers:
    - Models: the data and its behavior.
    - Controllers: a layer that allows interaction between the views and the models.
    - Views: the presentation layer for interacting with the user.

    Components:
    - Persistence: makes sure the game data can be stored away for later retrieval.
    - Time Machine: records changes in the game through time so it is possible to navigate the game back and forth.
    - Settings: contains the settings that determine how some of the game logic will apply.
    - Game Engine: contains all the game logic, which it applies to the game data to determine the path the game should take.

    This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png

    The requirements this architecture should satisfy are the following:
    - Save and load games.
    - Move through game history and see how the scoreboard changes as the game evolves.
    - Tie-breaks must be properly managed.
    - Games must be classified by hit type.
    - Every point can be modified.
    - Match name and player names must be stored.
    - Game logic must be configurable by the user.

    I would really appreciate any advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
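
    The Time Machine component maps naturally onto the memento pattern: snapshot the game state after every point and navigate by moving a cursor over the snapshots. A minimal sketch of the idea (class and method names are hypothetical, not taken from the question):

        import copy

        class TimeMachine:
            """Records snapshots of the game so it can be replayed back and forth."""

            def __init__(self):
                self._history = []   # chronological game-state snapshots
                self._cursor = -1    # index of the snapshot currently shown

            def record(self, game_state):
                # Discard any "future" snapshots if we record after stepping back.
                del self._history[self._cursor + 1:]
                self._history.append(copy.deepcopy(game_state))
                self._cursor = len(self._history) - 1

            def back(self):
                if self._cursor > 0:
                    self._cursor -= 1
                return self._history[self._cursor] if self._history else None

            def forward(self):
                if self._cursor < len(self._history) - 1:
                    self._cursor += 1
                return self._history[self._cursor] if self._history else None

    The Game Engine would call record() after each scored point, and a controller would call back() and forward() when the user navigates the match history; this also covers the "every point can be modified" requirement (step back, edit, record again).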

    Read the article

  • Winner of the 2012 Government Big Data Solutions Award

    - by Jean-Pierre Dijcks
    Hot off the press: the winner of the 2012 Government Big Data Solutions Award is the National Cancer Institute!! Read all the details on CTOLabs.com. A short excerpt to whet your appetite: "... This solution, based on the Oracle Big Data Appliance with the Cloudera Distribution of Apache Hadoop (CDH), leverages capabilities available from the Big Data community today in pioneering ways that can serve a broad range of researchers. The promising approach of this solution is repeatable across many other Big Data challenges for bioinformatics, making this approach worthy of its selection as the 2012 Government Big Data Solution Award." Read the entire post. Congrats to the entire team!!

    Read the article

  • Undefined control sequence in a noweb document

    - by Jean Baldraque
    I'm writing a TeX noweb document. I compile it with noweave -tex -filter "elide comment:*" texcode.nw > documentation.tex, but when I try to compile the resulting file with xetex -halt-on-error documentation.tex I get the following error message:

        ! Undefined control sequence.
        <argument> ...on}\endmoddef \nwstartdeflinemarkup \nwenddeflinemarkup

    It seems that \nwenddeflinemarkup is not recognized. If I delete every occurrence of \nwstartdeflinemarkup and \nwenddeflinemarkup from the document, it compiles without errors. What can the problem be?
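
    \nwstartdeflinemarkup and \nwenddeflinemarkup are emitted by newer versions of noweave but are only defined in a matching noweb support file, so a version mismatch leaves them undefined. If updating the support macros is not an option, one common workaround (a sketch, assuming you can live without the line markers) is to define them as no-ops before the first chunk:

        % Provide no-op fallbacks when the local noweb macros lack them.
        \ifx\nwstartdeflinemarkup\undefined
          \def\nwstartdeflinemarkup{}
          \def\nwenddeflinemarkup{}
        \fi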

    Read the article

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story and how I think big data could actually fix this (and potentially prevent some of it from happening). In the end this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details left out; I have no intention of being the annoyed blogger moaning about a specific company.

    What did I want to get? We watch TV, we have internet and we do have a land line. The land line is from a different vendor than the TV and the internet. I decided that this makes no sense and that I was going to get a bundle (no need to infer who this is, I just picked the generic bundle word as this is what I want to get) of all three services, as this seems to save me money. I also want to not talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is a reasonable expectation. I want to just do my shopping at 9.30pm while watching silly reruns on TV.

    Problem 1 - Bad links. So, I'm an existing customer of the company I want to buy my bundle from. I go to the website, I click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work.

    Big data solutions: 1) Do not show an existing customer the offers for new customers unless they are the same. This is only partially doable without login, but if a customer logs in, the application should always know that this is an existing customer. In general, imagine I do this from my home, going through this vendor's own internet service to their domain... an instant filter should move me into the "existing customer" route. 2) Flag dead or incorrect links. I clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for real-time fixing; just the fact that this link can be proactively fixed across my entire web domain is a good thing. Preventative maintenance!

    Problem 2 - Purchase cannot be completed. Apart from the fact that the browsing path to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on, the application either crashes (sending me to an error page), hangs, or drops into something like chat. So I try again, and again, and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked the feedback button and, frustrated as I was, tried to explain that this did not work...

    Big data solutions: 1) This web site does shopping cart analysis. I got an email the next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to my pain... 2) What should have happened is a Hadoop job going over all logged-in customers on the buy flow. It should flag anyone who is retrying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, take into account the feedback provided (note: always own your own website feedback, never just farm this out!!), and within a short turnaround time (30 minutes to 2 hours or so) email me with a link to complete my purchase. Not a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted...

    Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But now I am ranting...

    Why is this interesting? It is because of $$$$. I tried really hard: I did this all in the evening, and again in the morning before going to work. I kept failing, but I really wanted this to work... An email that said "sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate), here is the link to click and complete your purchase, and here are 2 movies on us as an apology" would have kept me as a customer and earned the additional $$$$ per month for the next couple of years. It would also have led to upsell on my phone package, etc. Instead, I went to a completely different company and bought service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be put in place with the software I describe routinely in this blog...

    For those who are coming to Openworld and see value in solving the above, or are thinking about how to solve it, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
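
    The detection step described above (flag a logged-in user who keeps retrying the same step of the buy flow inside a short window) is a straightforward clickstream job. A minimal sketch of the core pattern match, assuming hypothetical parsed log records of (user, url, timestamp):

        from collections import defaultdict
        from datetime import timedelta

        def flag_stuck_buyers(events, min_attempts=3, window=timedelta(minutes=20)):
            """Return (user, url) pairs hit repeatedly within the time window.

            `events` is an iterable of (user, url, timestamp) tuples in time
            order; the field names are assumptions, not an actual log format.
            """
            recent = defaultdict(list)   # (user, url) -> timestamps in the window
            stuck = set()
            for user, url, ts in events:
                key = (user, url)
                recent[key] = [t for t in recent[key] if ts - t <= window]
                recent[key].append(ts)
                if len(recent[key]) >= min_attempts:
                    stuck.add(key)
            return stuck

    In production this would run as a Hadoop job over the web logs, with the output feeding the follow-up email described above.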

    Read the article

  • Delete button in Ubuntu One Notes

    - by Jean MC13
    For the wishlist... The 'delete' button in Ubuntu One Notes is in exactly the same place as the 'save' button. So I saved a long note, but as my finger clicked twice on the save button, the second click landed on the 'delete' one! Lost, with no way to get it back. <:-(( This annoying behavior could easily be changed:
    - put the delete button somewhere else
    - ask for confirmation before deleting
    - offer a way to cancel (an 'undelete' function)

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not listen) about big data, here are two very cool sessions: BOF9877, a birds-of-a-feather session around all things big data, on Monday, Oct 1, 6:15 PM - 7:00 PM, Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel: he is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West Level 3, Overlook 3. We will network, socialize, and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at Openworld!!

    Read the article

  • Call for Paper: Oracle OpenWorld 2011

    - by jean-pierre.dijcks
    OpenWorld 2011 is now open for the public to submit session proposals. We would like to encourage our customers and partners to participate in this "call for papers" (CFP) process. The CFP for the general public (non-Oracle-employee submitters) closes on March 27, 2011. Here are the details:

    Conference location: Moscone Convention Center, San Francisco, CA
    Conference dates: Sunday - Thursday, October 2 - 6, 2011
    Conference website: http://www.oracle.com/us/openworld
    CFP website: https://oracleus.wingateweb.com/portal/cfp/

    Paper submission key dates:
    - Call for Papers begins: Wednesday, March 9
    - Call for Papers ends: Sunday, March 27, 11:59 pm PDT
    - Notifications for accepted and declined submissions sent: end of May

    For questions regarding the Call for Papers, send an email to [email protected]

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:

    Why Oracle: "Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support," said Sanjib Roy, CTO, Globacom.

    Implementation process: It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. mCentric leveraged Oracle Advanced Customer Support Services' implementation methodology to ensure configurations were tailored for peak performance, all patches were applied, and software and communications were consistently tested using proven methodologies and best practices.

    Read the entire profile here.

    Read the article

  • Speak now! Call for Papers at Oracle Openworld is now open

    - by Jean-Pierre Dijcks
    Present your thoughts to thousands of Oracle customers, developers, and partners. Do you have an idea that could improve best practices? A real-world experience that could shed new light on IT? The Oracle OpenWorld call for papers is now open. This is your opportunity to speak your mind to the world's largest gathering of the most knowledgeable IT decision-makers, leading-edge developers, and advanced technologists. So take a look at our criteria and join us at Oracle OpenWorld. We look forward to hearing from you. Register early and save: see and learn about the newest products, and meet experts and business leaders, all for less. Register for Oracle OpenWorld before March 30, 2012, and you'll save up to US$800 off the registration fee. Register now.

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice e-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews, and a lot more good info. Enjoy the read, so you come prepared for Openworld!! Read the e-book here. For those coming to Oracle Openworld (or the America's Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Read how a customer uses Oracle NoSQL Database

    - by Jean-Pierre Dijcks
    For those who have had the pleasure of being in SF for Oracle Openworld, you might have seen or heard about this story already. If you did not, here is a great story on how to use Oracle NoSQL Database. Apart from all the cool technology, I'm just excited that this is a company founded by a football international, dealing with sports data, games, and other cool things. Like an all-things-cool combo in one place.

    Read the article

  • Using R on your Oracle Data Warehouse

    - by jean-pierre.dijcks
    Since it is Predictive Analytics World in our backyard (or are we San Francisco's backyard...?), I figured it is well worth the time to dust off some old but important news. With big data (should we start calling it "any data analytics" instead?) being the buzzword and analytics the key operative goal, not moving data around is becoming more and more critical to business users. Why? Because instead of spending time moving data into your next analytics server, you should be running analytics on those CPUs. You could always do this with Oracle Data Mining within the Oracle Database, but a lot of folks want to leverage R as their main tool. Well, this article describes how you can do just that, and has since 2010... As Casimir Saternos concludes in the article: "There is a growing awareness of the need to effectively analyze astronomical amounts of data, much of which is stored in Oracle databases. Statistics and modeling techniques are used to improve a wide variety of business functions. ODM accessed using the R language increases the value of your data by uncovering additional information. RODM is a powerful tool to enable your organization to make predictions, classify data, and create visualizations that maximize effectiveness and efficiencies." Happy Analysis!

    Read the article

  • Big Data Sessions at Openworld 2012

    - by Jean-Pierre Dijcks
    If you are coming to San Francisco and you are interested in all aspects of big data, this Focus On Big Data document is a must-have. Some (other) highlights:
    - A performance demo of a full-rack Big Data Appliance in the engineered systems showcase
    - A set of hands-on labs on how to go from a NoSQL DB to an effective analytics play on big data
    - Much, much more
    See you all in a few weeks in SF!

    Read the article

  • Multi-threading in an MMORPG server

    - by jean
    For an MMORPG, there is a tick function that updates every object's state in a map. The function is triggered by a timer at a fixed interval, so each map's update can be dispatched to a different thread. On the other side, the server also handles players' incoming packets with its own threads, the I/O threads; generally, the handler for an incoming packet runs on an I/O thread. So there is a problem: thread synchronization. I have considered two methods: 1) Synchronize with a mutex: an I/O thread locks a mutex before executing a handler function, and the map thread locks the same mutex before executing the map's update. 2) Execute all handler functions on the map's thread: the I/O thread only queues the incoming handler, and the map thread pops the queue and calls the handler functions. Both have a disadvantage: delay. For method 1, if the map's tick function is running, all clients' requests must wait for the lock to be released. For method 2, if the map's tick function is running, all clients' requests must wait for the next tick to be handled. Of course, there is another method: add a lock to every function that uses data accessed by both the I/O threads and the map thread. But this is hard to maintain and easy to get wrong, since it requires carefully checking every variable for whether it is accessed by both kinds of threads. My problem is: is there a better way to do this? Note that a map is a logical concept, meaning no interactions can happen between two maps except transport. By I/O thread I mean a thread in the third-party network lib used to handle client requests.
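
    A common refinement of method 2 is to drain the handler queue at the top of every tick: I/O threads only pay for one thread-safe enqueue, handlers run on the map thread so the game state needs no further locking, and the worst-case delay is one tick interval. A minimal sketch of the idea (Python used for brevity; names are hypothetical):

        import queue

        class GameMap:
            def __init__(self):
                # Thread-safe queue: I/O threads enqueue, the map thread drains.
                self._handlers = queue.Queue()

            def post(self, handler):
                """Called from an I/O thread; never touches game state directly."""
                self._handlers.put(handler)

            def tick(self, dt):
                """Called from the map thread at a fixed interval."""
                # Run every handler queued since the last tick, then update.
                # Handlers execute on the map thread, so no extra locks needed.
                while True:
                    try:
                        handler = self._handlers.get_nowait()
                    except queue.Empty:
                        break
                    handler(self)
                self._update_objects(dt)

            def _update_objects(self, dt):
                pass  # per-object state updates go here

    If the one-tick delay is still too long for some requests (e.g. chat), those handlers can be split out to run on data the map thread never touches.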

    Read the article

  • A message to Denis Pitcher

    - by guybarrette
    Denis Pitcher, you posted this comment on my blog and some other blogs: "Devteach's promotion for a one-year MSDN subscription was not honoured, and attempts to contact them result in a 'we sent attendee info to MS, it's not our problem' response, while attempts to contact Microsoft result in the suggestion that any queries should be redirected to Devteach. Hopefully not all attendees were cheated, though if you're considering attending a future Devteach it is recommended that you don't hold any expectation that they'll honour their promotions." I spoke to Jean-René Roy, the DevTeach organizer, and also to the MSDN Canada folks. It looks like the email you used to register for the conference is now bouncing (maybe a typo when you registered?). That's why you haven't received any news about the offer. The fact that you're leaving the same comment on various blogs without your email address doesn't help at all. They want to contact you! Also, it looks like they never received your emails; maybe you used the wrong email addresses. Anyway, please contact Jean-René Roy at [email protected] ASAP.

    Read the article

  • Bridging The Gap Between Developers And Testers With VS 2010

    - by Vincent Grondin
    On January 29th, Etienne Tremblay and I presented in front of roughly 120 people in Ottawa a seven-hour "sketch" on how VS 2010 and TFS 2010 can help both devs and testers in their respective work. The presentation focused on how a tester's work can positively influence a developer's work and vice versa. The format was quite unusual: as I said, it's a "sketch" where Etienne and I "ignore" the audience and act as if we were at work, with the audience sort of "spying" on us. In all, I'm quite pleased with the content we presented, the format sure was a lot of fun to perform, and I think the audience liked it too... The good news for you people reading this post is that it got RECORDED, and it's now available for download in quick 25-to-35-minute episodes on the DevTeach web site: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx There were two cameras, one filming us and one capturing the screen for our demos. We switch from one to the other in an interesting flow, and Jean-René Roy made sure he kept all our goofs and didn't edit out those funny "oops moments" where we screw up in the scenario... Mostly educational, but hilarious at times!!! I encourage you all to download and watch the 13 episodes... Follow a day at work for a tester and a developer using VS 2010 and TFS 2010 to improve their chemistry! Thanks to Jean-René Roy for all the work he's put into this event, and to Microsoft and Pyxis for sponsoring the event.

    Read the article

  • Podcast Show Notes: The Big Deal About Big Data

    - by Bob Rhubart
    This week the OTN ArchBeat kicks off a three-part series that looks at Big Data: what it is, its effect on enterprise IT, and what architects need to do to stay ahead of the big data curve. My guests for this conversation are Jean-Pierre Dijcks and Andrew Bond. Jean-Pierre, based at Oracle HQ in Redwood Shores, CA, is product manager for Oracle Big Data Appliance and Oracle's big data strategy. Andrew Bond is Head of Transformation Architecture for Oracle, where he specializes in Data Warehousing, Business Intelligence, and Big Data. Andrew is based in the UK, but for this conversation he dialed in from a car somewhere on the streets of Amsterdam.

    Listen to Part 1: What is Big Data, really, and why does it matter?
    Listen to Part 2 (Oct 10): What new challenges does Big Data present for architects? What do architects need to do to prepare themselves and their environments?
    Listen to Part 3 (Oct 17): Who is driving the adoption of Big Data strategies in organizations, and why?

    Additional resources:
    http://blogs.oracle.com/datawarehousing
    http://www.facebook.com/pages/OracleBigData
    https://twitter.com/#!/OracleBigData

    Coming soon: a conversation about how the rapidly evolving enterprise IT landscape is transforming the roles, responsibilities, and skill requirements for architects and developers. Stay tuned: RSS

    Read the article

  • nginx reload failing: `object version does not match bootstrap parameter`

    - by Jean Jordaan
    I added a server stanza to my virtual.conf, and now nginx seems to have a problem reloading the config. At this point I don't know exactly what is going wrong or how to debug it better. Any help would be most appreciated. The config test succeeds:

        root@server:~# service nginx configtest
        nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        nginx: configuration file /etc/nginx/nginx.conf test is successful

    I'm tailing the logfile. Upon reload, the following error is logged, and as far as I can see the new config is not used:

        root@server:~# service nginx reload
        Reloading nginx:                                           [  OK  ]
        ==> /var/log/nginx/error.log <==
        nginx object version 0.8.54 does not match bootstrap parameter 1.0.15 at /usr/lib64/perl5/XSLoader.pm line 94.
        Compilation failed in require.
        BEGIN failed--compilation aborted.
        2012/10/18 12:31:07 [alert] 9620#0: perl_parse() failed: 2

    This is the version of nginx I'm running:

        root@server:~# yum info nginx
        Loaded plugins: fastestmirror, presto
        Loading mirror speeds from cached hostfile
         * base: ftp.udc.es
         * epel: mirror.nl.leaseweb.net
         * extras: ftp.udc.es
         * updates: ftp.cica.es
        Installed Packages
        Name    : nginx
        Arch    : x86_64
        Version : 1.0.15
        Release : 2.el6
        [...]

    Server OS: CentOS release 6.3 (Final)
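
    The message suggests the embedded-Perl module nginx loads was built for 0.8.54 while the binary on disk is 1.0.15 — typically a stale nginx.pm left behind by an older install — and a reload (SIGHUP) keeps the old master process running rather than executing the new binary. A hedged first round of checks:

        # Confirm the version and configure flags of the on-disk binary
        nginx -V

        # Look for a stale embedded-Perl module from an older nginx build
        find /usr/lib64/perl5 -name 'nginx*'

        # A full stop/start (unlike reload) executes the new binary
        service nginx stop
        service nginx start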

    Read the article

  • bootmgr is missing

    - by jean
    I have a Toshiba with Windows 7, and as soon as I turn my computer on it says BOOTMGR is missing. The only thing I can get into is the setup menu. Does anyone know what might be wrong? My step brother thinks everything may have been erased off my hard drive. The last thing he did when he used it was run the Toshiba updates and restart the computer. So if anyone knows what might be wrong, or how I can get my computer up and running, please let me know.
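
    A common repair sequence for "BOOTMGR is missing" is to boot a Windows 7 installation or system-repair disc, choose Repair your computer, open the Command Prompt, and rebuild the boot files (a general sketch, not Toshiba-specific; if the drive really was erased, this will not help):

        REM Rewrite the master boot record and the partition boot sector
        bootrec /fixmbr
        bootrec /fixboot

        REM Scan for Windows installs and rebuild the boot configuration data
        bootrec /rebuildbcd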

    Read the article

  • Migrating from Apache2 to Lighttpd creating errors in PHP/mySQL?

    - by Jean-Philippe Murray
    Ok, I've been using basic Ubuntu LAMP setups for years now, and I wanted to give lighttpd a try. My LAMP setup runs in a virtual machine with scripts running just fine. So I created a new virtual machine, starting with a fresh install of Ubuntu, and made my setups. On this new VM, lighttpd + PHP works just fine (or at least it seems to...). The problem occurs when I take the scripts from my LAMP setup and upload them to the new VM. I'm getting: Warning: mysql_real_escape_string(): Access denied for user 'www-data'@'localhost' (using password: NO) My lighttpd setup is configured as php-cgi, but my apache2 setup is not. Could this be the source of the problem? I would have thought that scripts are independent of the server configuration, so I doubt it. Also, I know that my DB connection information is good (as I can log in via phpMyAdmin perfectly). I'm in the dark here, any pointers? Thanks,
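
    That particular warning usually means mysql_real_escape_string() was called before any connection was opened, so PHP attempted an implicit connection with the defaults from php.ini (user www-data, no password); under Apache's mod_php the scripts may have picked up different defaults than under lighttpd's php-cgi. A minimal sketch of the usual fix, connecting first and passing the link explicitly (host and credentials are placeholders):

        <?php
        // Open the connection before any escaping; placeholders, not real values.
        $link = mysql_connect('localhost', 'dbuser', 'dbpass');
        if (!$link) {
            die('Connection failed: ' . mysql_error());
        }
        mysql_select_db('mydb', $link);

        // Passing $link avoids the implicit default connection entirely.
        $safe = mysql_real_escape_string($raw_input, $link);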

    Read the article

  • I am unable to get the subdomain from the URL in NGINX

    - by Jean-Nicolas Boulay Desjardins
    I am unable to get the subdomain from the URL in nginx. Here is my config:

        server {
            listen 80;
            server_name ~^(?<appname>)\.example\.com$;
            rewrite ^ https://$appname.example.com$request_uri? permanent;
        }

    When I go to http://bob.example.com/ I am sent to https://.example.com/. I don't know what I am doing wrong. I am using nginx 1.2.7. I have another config for http://example.com/, so I have one server block for the domain without the subdomain and a second one with the subdomain... This question is about the subdomain one.
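
    The named capture group in that server_name is empty — (?<appname>) matches the empty string — so $appname is always blank. A sketch of the likely fix is to give the group something to capture:

        server {
            listen 80;
            # Capture everything before .example.com into $appname.
            server_name ~^(?<appname>.+)\.example\.com$;
            rewrite ^ https://$appname.example.com$request_uri? permanent;
        }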

    Read the article
