Search Results

Search found 10695 results on 428 pages for 'none none'.


  • Apache2's recursive directory permission requirement

    - by Sn3akyP3t3
    My experience so far is with Ubuntu 10.04 and 12.04 64-bit, so if there are differences on other operating systems I'd like to know whether this is an OS-specific problem or not. The issue I've experienced is mostly confusion; once the cause is identified and corrected, no further related problems occur. The symptom is an Error 403 Forbidden. Typically it appears when attempting to use a directory other than /var/www/ for content. The cause is simply permissions, but it's puzzling why the required permissions must persist from at least one level below the filesystem root all the way down to the directory where the content is stored. For example:

        Alias /example/ "/home/user/permissions/can/be/confusing/with/apache/"
        <Directory /home/user/permissions/can/be/confusing/with/apache/>
            Options FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Here www-data is the user that spawned Apache and "user" is a member of the www-data group. Thus, if ownership of /home/user/* is user:user, then all that is necessary to display content with Apache is read and execute permission, so d---r-x--- should suffice (for practical purposes I'm using drwxr-x--- for most). However, if all directories under /home/user/ have permissions of drwxr-x--- while /home/user/ itself has permissions of drwx------, content will always fail with a 403 error. This is strange because it doesn't follow what I would consider the traditional logic of permissions, which should only apply to the directory holding the content, or to a particular file in that directory, and not to any directory further up the chain. Is this by design or is it a bug?
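
    A minimal sketch of the usual check and fix for this behaviour, assuming the example path above (these commands are not part of the original question): Apache needs the execute (search) bit on every directory in the path, not just the final one.

        # Show the permission bits of every directory on the path
        # (path taken from the example above).
        namei -m /home/user/permissions/can/be/confusing/with/apache/

        # Give the Apache process traversal (execute) rights on each ancestor;
        # o+x is the blunt approach, g+x works if the group is www-data.
        sudo chmod o+x /home/user
        sudo chmod o+x /home/user/permissions
        # ...and so on for every remaining directory down to the DocumentRoot.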

    Read the article

  • Dell Latitude will not boot after fresh Ubuntu 12.10 installation. Black screen

    - by James
    I have an old Dell Latitude D610 that I've just installed Ubuntu 12.10 onto, and now unfortunately it will not complete booting up. The screen flashes purple but then fades out and dies. I'm fairly sure there is an issue with the graphics drivers, as I had to turn on "nomodeset" in the options when installing off a DVD to see the installer, but I'm very new to Linux and don't know terribly much apart from what I've read on the net. I have been able to hold down Shift and bring up GRUB, and entered recovery mode; when I enter low graphics mode, the X server log file says it can detect a screen but found none with a usable configuration, then goes on to say a fatal server error has occurred and no screens have been found at all, telling me to check the log file at "/var/log/Xorg.0.log". I have no idea how to check this specific log file! Attempting to actually go further than this in low graphics mode and restart the display just fills the screen with artefacts. It is all very strange and annoying because I did once, seemingly at random, have the machine boot completely for no apparent reason, but upon restarting the issue reoccurred.
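
    A short, hedged sketch of how one would typically read that log and make the nomodeset workaround permanent, assuming a stock GRUB setup (not part of the original question):

        # Read the X server log mentioned in the error (note the capital X):
        less /var/log/Xorg.0.log

        # Make "nomodeset" permanent: add it to GRUB_CMDLINE_LINUX_DEFAULT,
        # then regenerate the GRUB configuration and reboot.
        sudo nano /etc/default/grub    # e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        sudo update-grub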

    Read the article

  • Apache no longer starts at Windows boot up

    - by w3d
    I have Apache installed as part of XAMPP - a local test server. It is configured as a Windows (XP) service with startup type "Automatic". For a long time it has always started when Windows boots up, but recently this has stopped happening. I now need to start it manually via the XAMPP Control Panel - at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows Updates - none of which appear to have "known issues" relating to this - and updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot-up? EDIT#1: There are 2 related errors in my system event log regarding the Service Control Manager: "Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect." and "The Apache2.2 service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion." When I manually start the Apache server after boot-up there are 2 "information" events stating that it was "sent a start control" and that it "entered the running state", although I notice it appears to take 19 seconds between the start control being sent and entering the running state - according to the event log. So maybe 30 seconds during boot-up isn't long enough (anymore) for Apache to start?

    Read the article

  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following:

        public enum FileAccess { None = 0, Read = 1, Write = 2, }

        class Program
        {
            static void Main(string[] args)
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;
                Console.WriteLine(access);
            }
        }

    The output of this simple console application is 3, the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn't the best representation. What you really want is for the output to look like: Read, Write. To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute since it provides a clear indication of intent. One "problem" with flags enumerations is determining when a particular flag is set. The code to do this isn't particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code: (access & FileAccess.Read) == FileAccess.Read. Starting with .NET 4, a HasFlag method has been added to the Enum class which allows you to easily perform these tests: access.HasFlag(FileAccess.Read). This method follows one of the "themes" for .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this that you must write.

    Read the article

  • 3DS Max 2012 OBJ file import missing polygons

    - by Vit
    I started learning OpenGL and got to the point where I want to import some "real" objects. After some Googling I decided to go with the OBJ file format for a start, since it is simple to understand and there are plenty of tutorials on how to read it properly. Through my university I have access to 3DS Max 2012. So I tried to create a very simple model (just a deformed cube) and export it to an OBJ file, with just vertices and triangles for the moment, without textures, so I could examine its structure myself. But when I imported it right back into 3DS Max from the OBJ file, it rendered strangely, as if it were smoothed and lit by a light source, even though I have none in the scene. The geometry, its wireframe, is intact though. So I thought the problem might be exporting only vertices and triangles, so I downloaded an Enterprise-D model from the internet, exported it with everything on (normals, textures, everything), and imported it again. Now some polygons are missing. So I want to ask: am I doing something terribly wrong, or is there some incompatibility between the .max and .obj formats, even for a simple textured model without any light sources, animation, etc.? Thanks. Edit: I tried the objects in MeshLab; the first, deformed cube was absolutely OK, but it still bothers me that 3DS Max doesn't render it properly. In the Enterprise-D model, polygons are missing even in MeshLab. I uploaded a rar archive with the .max model of the Enterprise, the same model exported from 3DS to .obj, and the obj model of the deformed cube. Download here (2.5 MB, filesonic).

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft's pricing for SQL Server in the cloud, SQL Azure, has been announced: $9.99 per month for 0-1GB and $99.99 per month for up to 10GB. There's currently a 10GB maximum size cap for SQL Azure; for larger data storage needs, you'll need to break the databases into smaller sizes. Scaling SQL Azure Applications: If you think you're going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.90, essentially what you would pay anyway) and just make really sure none of the individual databases exceeds 10GB. Beep Beep, Back That Database Up: The bandwidth costs for SQL Azure are $0.15 per GB of outbound bandwidth. Assuming that you don't compress the data before you pull it out of the cloud, that means daily backups of a 1GB database will add another $4.50 per month, and a 10GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost. It's not completely clear from the press release, but if Microsoft follows Amazon's pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $0.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs and minimize the Azure storage costs ($0.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there's no native backup support in SQL Azure, and it's not clear whether Windows Azure will include tools like SQL Server Integration Services. More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/ - Anish, S

    Read the article

  • How do I get a Canon MG, MP and MX series USB printer working?

    - by Old-linux-fan
    The MP6150 printer driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. Tried the usual things: power off the printer, restart Ubuntu, etc. Listed below are the results of lsusb and fstab:

        hans@kontor-linux:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 04a9:174a Canon, Inc.
        Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
        Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

        hans@kontor-linux:~$ sudo cat /etc/fstab
        [sudo] password for hans:
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0

    Here is what I have tried: tried the HK link again, but no luck so far. However, I connected the printer wirelessly to the router on another XP box. Installing the driver from ppa:michael-gruz/canon doesn't work, but the driver is installed.
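
    A hedged aside, not in the original post: USB printers are not mounted like storage devices; they are handled by CUPS, so the useful checks look roughly like this (the commands are standard CUPS tools, everything else is assumption):

        # List the devices CUPS can detect; the Canon should show up as a usb:// URI.
        sudo lpinfo -v

        # List configured print queues and the default destination.
        lpstat -p -d

        # If a queue exists but jobs stall, the CUPS error log is the place to look.
        sudo tail -n 50 /var/log/cups/error_log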

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS? Every Microsoft-oriented infrastructure in today's enterprises depends largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory implementation makes life for IT personnel and users alike a lot easier: centralised management and the abilities it opens up are ample. But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another. The Domain Name Service: The domain naming service (or DNS for short) is a service which translates IP addresses (the identifiers for each computer in your domain) into readable and easy to understand names. This service is a prerequisite for ADDS to work, and having a wrong record in a DNS server will make any ADDS service fail. Generally speaking a DNS service will be run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box. Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause a headache down the line. It is for that reason that I like to start building from the bottom up and begin with a generic installation of ADDS (which will still differ for every client), making it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure. Whenever starting an ADDS deployment I ask the client the following questions: What are your long-term plans and goals? How flexible do you want it? Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?
Those three questions should give some sort of indication of what direction can be taken, and whether the client has thought about some things themselves :). The technical side of things: What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows: network (WAN/LAN links and physical sites); DNS namespacing; all in one domain or split up in different domains/forests; security (both for ADDS and physical sites). The network side of things: Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages. DNS namespacing: Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is always to use a proper two-part name (contoso.com or contoso.lan, for example). Internal namespace: This is just what it sounds like - you have a DNS domain internally which is different from what the client has as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that the DNS side of things is more obscured from the internet, but it will require additional work to publish services to the web. External namespace: Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network; in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things. Split DNS: Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. That is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them. All in one or not? In determining your Active Directory design you can look at the following possibilities: single forest, single domain; single forest, multiple domains; multiple forests, multiple domains. I've listed the possibilities in increasing order of administrative magnitude. Microsoft recommends using a single forest, single domain in as many situations as possible.
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements to have multiple forests. Security: What kind of security is required on the domain, and does this reflect the physical security on the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality; in essence it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected. Th- Th- Th- That's all folks! Well, at least for now. In future editions of this series we'll be walking through the different tasks that need to be done and the thought which needs to be put into them. But throughout, we'll work from the concept of running a single forest, single domain with a split DNS setup. See you next time!

    Read the article

  • Which ecommerce framework is fast and easy to customize?

    - by Diego
    I'm working on a project where I have to put online an ecommerce system which will require a good amount of custom features. I'm therefore looking for a framework which makes customization easy enough (from an experienced developer's perspective, I mean). The language should be PHP, and time is a constraint - I don't have months to learn. Additionally, the ecommerce site will have to handle around 200,000 products from day one, which will increase over time, hence performance is also important. So far I have examined the following: Magento - complicated and, as far as I could read, slow when the database contains many products; it's also resource intensive, and we can't afford a dedicated VPS from the beginning. OpenCart - rough at best, and the documentation is extremely poor; also, it's "free" to start, but each feature is implemented via 3rd-party commercial modules. OSCommerce - buggy, inefficient, outdated. ZenCart - derived from OSCommerce, doesn't seem much better. Prestashop - it looks like it has many incompatibilities; also, most of its modules are commercial, which increases the cost. In short, I'm still quite undecided, as none of the above seems to satisfy the requirements. I'm open to evaluating closed source frameworks too, if they are any better, but my knowledge about them is limited, so I'll welcome any suggestion. Thanks for all replies.

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package, but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from a terminal I get this error:

        $ gnome-tweak-tool
        WARNING : Shell not installed or running
        WARNING : Error detecting shell
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
            raise Exception("Shell not running or DBus service not available")
        Exception: Shell not running or DBus service not available
        INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
        WARNING : Shell not running None
        INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
        Segmentation fault (core dumped)

    I had a look with dconf-editor to see if I could just add the missing keys, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd prefer not having to reinstall everything. Which package is broken - can I just reinstall that? EDIT: I found that when running as root, gnome-tweak-tool no longer crashed, so possibly a permission issue somewhere; I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to want to save the values edited. I normally just have the Unity launcher on the primary display, but wanted to check what it was like having it on both. I didn't like it, so I went into unity-tweak-tool to set it back - but regardless of how many times I tick "only primary display" it never changes anything. What does unity-tweak-tool actually change, and can I do this directly somehow?
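
    A hedged sketch, not from the original post, of how the settings behind these tweak tools can be inspected directly; the example key is only an illustration, not necessarily the one unity-tweak-tool uses:

        # List every key and value a schema exposes.
        gsettings list-recursively org.gnome.desktop.interface | head

        # Read or change a single key directly (illustrative key only).
        gsettings get org.gnome.desktop.interface clock-show-date
        gsettings set org.gnome.desktop.interface clock-show-date true

        # Watch which keys change while flipping a switch in unity-tweak-tool;
        # this reveals the real key behind any GUI toggle.
        dconf watch /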

    Read the article

  • First time application where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to go online all the time. Unfortunately there are not that many applications left which are capable of doing so, and none of them is supported to this day. I decided to make my own, and at the same time to learn something new for myself. The idea is quite simple: the application should sit in the system tray and wait until the input language gets changed, for example to Russian. If the Russian language is activated, the application should start listening for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian, etc.) and customise the dictionary for any of them. So my questions are: Which language should I choose for this task - C++, C#, or maybe something hardcore like assembler - given that the application should work natively with Windows XP/Vista/7 or possibly Mac? (Cross-platform support is good, but my main target is Windows.) Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a "key logger" and basically not a virus? Where should I start and what should I be aware of? P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first. Right-Time Marketing: Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly - fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher and, by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That's when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as "Jane with the snotty kid so make sure we check her out fast," but you suddenly became "time-starved female age 20-30 with kids." I'm not saying that was a bad thing - it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today's sophisticated consumer demands scale, experience, and personal attention. To some extent we've delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn't love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don't be surprised if it's looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we'll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer's purchase journey, and consumers get higher levels of service from the retailer. When I call my bank, a couple of things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I'm calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that's the reason I'm calling. It puts all the relevant information on the customer service rep's screen as it connects the call. When I complain about the fee, the rep immediately sees I'm a great customer and I travel lots, so she suggests switching me to their traveler's card that doesn't have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let's combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that's the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he's heading to lunch. If he were near the mall location on a Saturday morning, that's a completely different context. But on his way to lunch, we'll let Thomas know that we've got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas' past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customer to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • How can I make fsck run non-interactively at boot time?

    - by Nelson
    I have a headless Ubuntu 12.04 server in a datacenter 1500 miles away. Twice now on reboot the system decided it had to fsck. Unfortunately Ubuntu ran fsck in interactive mode, so I had to ask someone at my datacenter to go over, plug in a console, and press the Y key. How do I set it up so that fsck runs in non-interactive mode at boot time with the -y or -p (aka -a) flag? If I understand Ubuntu's boot process correctly, init invokes mountall, which in turn invokes fsck. However, I don't see any way to configure how fsck is invoked. Is this possible? (To head off one suggestion: I'm aware I can use tune2fs -i 0 -c 0 to prevent periodic fscks. That may help a little, but I need the system to try to come back up even if it had a real reason to fsck, say after a power failure.) In response to follow-up questions, here are the pertinent details of my /etc/fstab; I don't believe I've edited this at all from what Ubuntu put there:

        UUID=3515461e-d425-4525-a07d-da986d2d7e04 /     ext4 errors=remount-ro 0 1
        UUID=90908358-b147-42e2-8235-38c8119f15a6 /boot ext4 defaults          0 2
        UUID=01f67147-9117-4229-9b98-e97fa526bfc0 none  swap sw                0 0
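
    One commonly suggested approach, offered here as a hedged sketch rather than a confirmed answer: mountall reads the FSCKFIX setting from /etc/default/rcS on releases of this vintage, and setting it to yes makes the boot-time fsck run non-interactively.

        # Show the current value (usually FSCKFIX=no; add the line if it is absent).
        grep FSCKFIX /etc/default/rcS

        # "yes" makes the boot-time fsck run with -y, repairing without prompting.
        sudo sed -i 's/^FSCKFIX=.*/FSCKFIX=yes/' /etc/default/rcS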

    Read the article

  • deWitters Game loop in libgdx(Android)

    - by jaysingh
    I am a beginner and I want a complete example in LibGDX for Android of a fixed-time game loop: how to limit the framerate to 50 or 60, and how to manage interpolation between game states, with simple example code in the style of the deWiTTERS game loop. For example:

        @Override
        public void render() {
            float deltaTime = Gdx.graphics.getDeltaTime();
            Update(deltaTime);
            Render(deltaTime);
        }

    libgdx comments: There is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the nightlies. "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle gpu/software vsyncing. Play around with it." (Mario) NOTE that none of these limit the framerate to a specific value... if you REALLY need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.

    Read the article

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands:

        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install update-manager-core
        sudo vi /etc/update-manager/release-upgrades    (and changed the last line to Prompt=normal)
        sudo do-release-upgrade -d

    This upgrade was OK. I decided to repeat the same steps to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message:

        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Can not mark 'xubuntu-desktop' for upgrade
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager'
        package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Nov 21 09:37:21 2011) ===
        === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) ===

    How can I correct it?
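
    A hedged suggestion, not part of the original question: the resolver names the xubuntu-desktop metapackage as the blocker, so checking the dist-upgrade logs and temporarily removing that metapackage (it only groups other packages) is a common next step:

        # See why the resolver refused the upgrade.
        less /var/log/dist-upgrade/main.log
        less /var/log/dist-upgrade/apt.log

        # Remove the metapackage (Xubuntu itself stays installed), retry,
        # then reinstall xubuntu-desktop after the upgrade completes.
        sudo apt-get remove xubuntu-desktop
        sudo do-release-upgrade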

    Read the article

  • Will Google treat this JavaScript code as a bad practice?

    - by Mathew Foscarini
    I have a website that provides a custom UX experience implemented via JavaScript. When JavaScript is disabled in the browser, the website falls back to CSS for the layout. To make this possible I've added a noJS class to the <body> and quickly remove it via JavaScript:

        <body class="noJS layout-wide">
        <script type="text/javascript">var b=document.getElementById("body");b.className=b.className.replace("noJS","");</script>

    This caused a problem when the page loads and JavaScript is enabled: the body immediately has its noJS class removed, and this causes the layout to appear messed up until the JavaScript code for the layout is executed (at the bottom of the page). To solve this I hide each article via JavaScript by adding a CSS class fix (which is display:none) as each article is loaded:

        <article id="q-3217">....</article>
        <script type="text/javascript">var b=document.getElementById("q-3217");b.className=b.className+" fix";</script>

    After the page is ready I show all the articles in the correct layout. I've read many times in Google's documentation not to hide content, so I'm worried that Google will penalize my website for doing this.

    Read the article

  • SharpDx: using maximized RenderForm

    - by ceiling cat
    I'm trying to learn DirectX via SharpDX and am very new to this. What I want to do is be able to draw 2D shapes for a game I'm trying to make. So I started with the "MiniRect" demo that came with SharpDX. Since I want my game to be full-screen, I changed the RenderForm to be maximized (using WindowState) and set the FormBorderStyle to None. I noticed that even if the form is set to maximized, its size is always 800 by 600. In my render loop, if I specify the location for the rectangle as 400 by 300, it is drawn in the middle of the screen. If I try to set the location via mouse click (using the RenderForm's MouseClick event), there is always an offset between where the mouse was clicked and where the drawing shows up. My system DPI is set to the standard (96), so there shouldn't be any scaling, but it looks like there is a scaling factor of about 2.4. If it's not the DPI settings, does anyone have any idea what this might be related to? The problem doesn't happen if the RenderForm is not maximized. Is there another way to draw full-screen using SharpDX? Thanks

    Read the article

  • create a simple game board android

    - by user2819446
    I am a beginner in Android and I want to create a very simple 2D game. I've already programmed a Tic-Tac-Toe game, but drawing the game board and connecting it with my game and input logic was quite difficult (it was all done separately: canvas drawing, calculating positions, etc.). By now I've figured out that there must be a simpler way. All I want is a simple grid, something like this: http://www.blelb.com/deutsch/blelbspots/spot29/images/hermannneg.gif. The edges should be visible and black, and each cell editable, containing either an image or nothing, so I can detect whether the player is on that cell or not, move it, and so on. Think of it as chess or something similar. Searching the internet over the last few days, I have become a bit overwhelmed by all the different options. After all, I think GridView or GridLayout is what I am searching for, but I'm still stuck. I hope you can help me with some good advice or maybe a link to a nice tutorial. I have checked several already, and none were exactly what I was searching for.

    Read the article

  • How to re-configure graphics from Intel integrated to Intel / ATI switchable?

    - by Bucic
    There are lots of 'how to get switchable graphics to work' guides, but I found none on how to configure a system for switchable graphics operation on Ubuntu from the ground up, nor any explaining the current driver situation for particular computer models (integrated+discrete combinations). Example: https://help.ubuntu.com/community/HybridGraphics. My system being mature and running on the Intel integrated card also makes things complicated. System information: Ubuntu 12.04 amd64, installed clean with the system configured to use only the integrated Intel card; Lenovo Thinkpad T500; Intel GMA 4500MHD / ATI Mobility Radeon HD 3650. Current situation: a mature and up-to-date system with no configuration changes beyond what's given above. I've made a backup image of the system (Clonezilla), so regardless of what's written below let's assume it's our starting point. If something in "What I have already tried" is not clear you may as well disregard it. What I have already tried: configuring the BIOS for switchable graphics and then installing Additional Hardware drivers - returned an error; installing the proprietary amd-driver-installer-12.6-legacy-x86.x86_64.run automatically - the system starts into 'low-graphics mode'; tried fixing as per https://help.ubuntu.com/community/BinaryDriverHowto/ATI#Manually_installing_Catalyst_12.6.2C_special_case_for_Intel.2BAC8-ATI_hybrid_graphics - got lost, gave up. Please note that while configuring the BIOS for integrated graphics only is pretty straightforward, configuring it for switchable graphics is not.
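
    A hedged sketch, not in the original question, of how to confirm what the kernel and X server currently see before choosing a driver path:

        # List both GPUs as the kernel sees them.
        lspci | grep -iE 'vga|3d|display'

        # With kernel mode setting active, vga_switcheroo reports which GPU
        # is powered and which is in use (debugfs, hence sudo).
        sudo cat /sys/kernel/debug/vgaswitcheroo/switch

        # Check which display driver X actually loaded.
        grep -iE '(intel|radeon|fglrx)' /var/log/Xorg.0.log | head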

    Read the article

  • Bash script not adding variables to session

    - by travega
    I have a bash script that I have added as a startup application. It does a bunch of exports and alias assignments:

        #! /bin/bash
        alias devhm='cd ${DEV_HOME}; ll';
        alias wlhm='cd ${WL_HOME}; ll';
        alias dirch='watch --interval=1 "ls -la"';
        alias vols='watch --interval=1 "df -h"';
        alias svn-update='svn update --depth infinity ./*';
        alias mci="~/mci.sh";
        alias vncserver="vncserver -geometry 1680x1050";
        alias ..="cd ..";
        alias hist="history | grep ";
        export PROXY_HOST=proxy.my.setup;
        export PROXY_PORT=3128;
        export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/oracle/12.1/client64/lib;   # note: the ':' separator was missing in the original
        export ORACLE_HOME=/usr/lib/oracle/12.1/client64;
        export TNS_ADMIN=${ORACLE_HOME}/network/admin;
        echo "DONE!";

    But none of these values are available in my terminal sessions any more. Even when I run the script straight from the terminal like so: ./setup.sh I see the "DONE!" prompt printed, but no aliases or env variables are set. If I copy and paste the contents of the file into the terminal, the aliases and env variables are set. I have tried adding a line to execute the script from .bashrc also, but still no aliases or env variables are set. Any ideas what might be going on here? Also, could anyone suggest a better way to have these env variables/aliases added to every terminal session?
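
    A hedged explanation, not part of the original question: running ./setup.sh starts a child shell whose aliases and exports disappear when it exits, so the script has to be sourced into the current shell instead. A minimal sketch, assuming the script is saved as ~/setup.sh:

        # Sourcing runs the script inside the current shell, so its aliases
        # and exports survive after it finishes.
        source ~/setup.sh     # or the POSIX form: . ~/setup.sh

        # To pick it up in every terminal session, source it from ~/.bashrc
        # instead of executing it:
        echo 'source ~/setup.sh' >> ~/.bashrc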

    Read the article

  • How to read values from RESX file in ASP.NET using ResXResourceReader

    Here is a method which returns the value for a particular key in a given resource file. The method below assumes resourceFileName is the resource file name and key is the string for which the value has to be retrieved.

        public static string ReadValueFromResourceFile(String resourceFileName, String key)
        {
            String _value = String.Empty;
            ResXResourceReader _resxReader = new ResXResourceReader(
                String.Format("{0}{1}\\{2}", System.AppDomain.CurrentDomain.BaseDirectory.ToString(), StringConstants.ResourceFolderName, resourceFileName));
            foreach (DictionaryEntry _item in _resxReader)
            {
                if (_item.Key.Equals(key))
                {
                    _value = _item.Value.ToString();
                    break;
                }
            }
            return _value;
        }

    Read the article

  • How to get rid of devices in the right panel of Nautilus?

    - by Patryk
    I would like to get rid of the not-mounted NTFS partitions in Nautilus' right panel (I just want "352 GB Filesystem" - the d drive - to be there). First of all, "352 GB Filesystem" is in fact d, so I do not know why it is duplicated. Secondly, I have put Acer and SYSTEM RESERVED as nouser mounts on purpose, so that I (or somebody else) will not format them (or worse) by accident. So my /etc/fstab looks like this:

        #comments.......
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        UUID=1384cee0-6a71-4b83-b0d3-1338db925168 / ext4 noatime,errors=remount-ro 0 1
        UUID=e3729117-b936-4c1d-9883-aee73dab6729 none swap sw 0 0
        #------ MY WINDOWS D DRIVE----------
        UUID=98E8B14DE8B12A80 /media/d ntfs defaults,errors=remount-ro,user 0 0
        #-------ACER----------------
        UUID=01CBEA9D4476C2F0 /media/acer ntfs defaults,noauto,noexec,ro,nouser 0 0
        #-------SYSTEM RESERVED-----
        UUID=01CBEA95760F9330 /media/systemreserved ntfs defaults,noauto,noexec,ro,nouser 0 0
        #UUID=58F9-C17E /boot/efi vfat defaults 0 1

    blkid and fdisk -l:

        root@XXX:/home/YYY# fdisk -l
        ...
        Device Boot Start End Blocks Id System
        /dev/sda1 4096 27262975 13629440 27 Hidden NTFS WinRE
        /dev/sda2 27262992 27467791 102400 7 HPFS/NTFS/exFAT
        /dev/sda3 27467792 232267775 102399992 7 HPFS/NTFS/exFAT
        /dev/sda4 232267793 976771071 372251639+ f W95 Ext'd (LBA)
        /dev/sda5 232267795 918867967 343300086+ 7 HPFS/NTFS/exFAT
        /dev/sda6 * 918870016 968044543 24587264 83 Linux
        /dev/sda7 968046592 976771071 4362240 82 Linux swap / Solaris

        root@XXX:/home/YYY# blkid
        /dev/sda1: LABEL="PQSERVICE" UUID="01CBEA95730D28A0" TYPE="ntfs"
        /dev/sda2: LABEL="SYSTEM RESERVED" UUID="01CBEA95760F9330" TYPE="ntfs"
        /dev/sda3: LABEL="Acer" UUID="01CBEA9D4476C2F0" TYPE="ntfs"
        /dev/sda5: UUID="98E8B14DE8B12A80" TYPE="ntfs"
        /dev/sda6: UUID="1384cee0-6a71-4b83-b0d3-1338db925168" TYPE="ext4"
        /dev/sda7: UUID="e3729117-b936-4c1d-9883-aee73dab6729" TYPE="swap"

    Read the article

  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but then after a few minutes it would stop working even though I was still connected to the network. Lately things have turned worse: I connect to my wireless network, but it just never works. I tried all sorts of solutions on offer here and in other forums, but none worked. At best I got the wireless to work until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I have tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, run all the suggested updates, and get the following response when I type iwconfig in the terminal:

        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"Arsenal"
                  Mode:Managed  Frequency:2.452 GHz  Access Point: 00:04:ED:48:67:89
                  Bit Rate=1 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-29 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:27  Invalid misc:57   Missed beacon:0

        eth0      no wireless extensions.

    Thanks in advance for any help that might be offered.
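
    A hedged sketch, not from the original post: Eee PC Atheros chips are usually driven by ath9k, and a frequently suggested workaround for stalls like this is reloading the module with hardware encryption disabled. Verify the driver with the first command before trying the rest:

        # Identify the wireless chipset and the driver currently bound to it.
        lspci -k | grep -iA3 network

        # Reload ath9k with hardware crypto disabled; this lasts until reboot.
        sudo modprobe -r ath9k
        sudo modprobe ath9k nohwcrypt=1

        # If the connection becomes stable, make the option permanent.
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf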

    Read the article

  • How to get website/product reviews reflected in Google's search results using the review-aggregate format

    - by BasP
    I manage a website called Rent A Boat Amsterdam. We have a system that gathers reviews from people who have used our services and publishes these customer reviews, making them available to all website visitors. When these customer reviews are published, we place them within the appropriate tags according to the guidelines set by Google, which you can find here. An example looks like this:

        <li class="" style="clear:both;">
          <div class="hreview">
            <div class="item" style="display:none;"><span class="fn">Boatname</span></div>
            <div style="border:1px solid #DEDEDE; background-color:#D9FFD4; margin:0 10px 10px 0; float:left; text-align:center; padding:10px; height:50px; width:70px;">
              <h1><span class="rating">10</span></h1>9-Jun-2010
            </div>
            <div>
              <div class="description"><p>Great canal Cruise!</p></div>
              <p class="reviewer vcard"><strong><span class="fn">First name Last name</span></strong></p>
            </div>
          </div>

    We implemented these tags a couple of months ago, but there are no visible results in the Google SERPs, whereas I had expected to find the reviews/ratings displayed like the star-rating rich snippets in Google's examples. Is anyone familiar with this topic and able to help me find out why the review-aggregate format doesn't seem to have the desired effect?

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent in my Apache config file and then restarted Apache. When I try to access the old domain foo.com, I get a 403 error. This is what my Apache config file looks like:

        <VirtualHost *:80>
            ServerName foo.com
            #ServerAlias www.foo.com
            #ServerAdmin [email protected]
            Redirect permanent / http://www.foobar.org/
            DocumentRoot /path/to/project/foo/web
            DirectoryIndex index.php
            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log"
            <Directory />
                Order Deny,Allow
                Deny from all
            </Directory>
            <Files ~ "^\.ht">
                Order allow,deny
                Deny from all
            </Files>
            <Directory /path/to/project/foo/web>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
                RewriteEngine On
                # We check if the .html version is here (caching)
                RewriteRule ^$ index.html [QSA]
                RewriteRule ^([^.])$ $1.html [QSA]
                RewriteCond %{REQUEST_FILENAME} !-f
                # No, so we redirect to our front end controller
                RewriteRule ^(.*)$ index.php [QSA,L]
            </Directory>
            <Directory /path/to/project/foo/web/uploads>
                Options -ExecCGI -FollowSymLinks -Indexes -Includes
                AllowOverride None
                php_flag engine off
            </Directory>
            Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf
            <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf>
            # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf
            # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Can anyone spot what I may be doing wrong? The site foobar.org does exist, so I don't know why this error occurs - help?
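
    A hedged sketch, not part of the original question, of how to see exactly which response the old domain returns and which rule produced the 403; <server-ip> is a placeholder for the machine's address:

        # See the raw status line and headers the old domain returns.
        curl -sI http://foo.com/ | head -n 5

        # Compare with an explicit Host header, in case the request is being
        # answered by a different (default) virtual host.
        curl -sI -H "Host: foo.com" http://<server-ip>/ | head -n 5

        # The error log defined above says which Deny/Options rule caused the 403.
        tail -n 20 /var/log/apache2/$(date +%Y%m).foo.errors.log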

    Read the article
