Search Results

Search found 4685 results on 188 pages for 'proper'.

Page 87/188

  • Repeat use of Schema / Rich Snippets Markup, i.e. LocalBusiness Data

    - by bybe
    I am unable to find official wording and I'm hoping that a Rich Snippets/Schema guru can give me some insight into the proper usage of repeated content when it comes to using markup. I'm building a site that wants to use Schema as the markup type, and the owner would like as much usage as possible. The business name, telephone and address will appear on every page. Is it valid, or even useful, to use Rich Snippets on every page where this information is displayed? For example, this information appears in the header and footer of every page of the site; to give you an example of my current markup, see below:

      <body itemscope itemtype="http://schema.org/LocalBusiness">
        <header>
          <a itemprop="url" href="http://www.domain.co.uk/">
            <img itemprop="logo" src="image.png" alt="Company Name Logo" />
          </a>
          <span itemprop="telephone">01202 000 000</span>
        </header>
        <div> This is where the content will go</div>
        <footer>
          <span itemprop="name">Company Name</span>
          <span itemprop="description"> A small little bit about this company</span>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">Address Goes here</span>
            <span itemprop="addressLocality">Area Here</span>,
            <span itemprop="addressRegion">Region Here</span>
          </div>
        </footer>
      </body>
      <!-- Local Business Schema Now Closed -->

    So as you can see above, this information will be displayed on every single page. Is it valid, or bad practice, to repeat this information in Schema markup?

    Read the article

  • How can I make an unmounted / unmountable NTFS disk not show up in the nautilus devices area?

    - by Dennis
    I have an idea that my /etc/fstab is a real mish-mash and I don't remember how it got that way. First of all, it looks like this:

      UUID=9EB80807B807DD21                     /media/Storage  ntfs-3g  users     0 0
      UUID=a60397fd-964a-45b1-ad35-53c8a4bee010 /               ext4     defaults  0 1
      UUID=1764825d-b8ba-4620-b3b0-e979b6f4f5c4 swap            swap     sw        0 0
      UUID=255DA1E406E29DBC                     /media/sda2     ntfs-3g  defaults  0 0
      UUID=2CCCF161CCF1262C                     /mnt/sda1       ntfs-3g  umask=000 0 0
      /dev/fd0                                  /media/floppy0  vfat     noauto    0 0

    I started with an old XP install on disk /dev/sda that I don't use anymore but didn't want to delete, so I shrunk the XP partition, added an NTFS partition that would be common to both systems (labeled "Common" in XP), then installed Lucid on an extended ext4 partition. On this disk, of course, the ext4 system partition comes up as /, the go-between partition auto-mounts on /media/sda1 but shows up in Nautilus as COMMOM, while the XP system disk does not show up in Nautilus, though I can get to it by navigating to /mnt/sda1. A second hard drive (/dev/sdb) that I stuck in was already formatted NTFS with a bunch of stuff and labeled "Storage". It auto-mounts to /media/Storage, but another un-mounted disk also shows up in the Nautilus device area called Storage and it can't be mounted (here and in "Places" are the only times it appears). I would primarily like this non-existent (or already mounted, depending on how you look at it) disk to not show up, but I wouldn't mind an explanation of why one labeled partition auto-mounts to a /media mount point but shows up by label, one does not show up as mounted at all but mounts to a /mnt mount point and is there for navigation, and one is mounted to a directory of the same name as the label. I would love to have some consistency / direction on what is proper in this circumstance. No doubt I caused this with the fstab, but I really don't remember what my rationale was if I edited it manually.

    Read the article

  • What's cool about Lisp nowadays? [closed]

    - by Kos
    Possible Duplicates: Why is Lisp useful? Is LISP still useful in today's world? Which version is most used? First of all, let me clarify: I'm aware of Lisp's place in history, as well as in education. I'm asking about its place in practical application, as of 2011. The question is: what features of Lisp make it the preferred choice for projects today? It's widely used in various AI areas as far as I know, and probably also elsewhere. I can imagine projects choosing, for instance, Python because of its concise, readable syntax and it being dynamic, Haskell for being purely functional with a powerful type system, Matlab/Octave for the focus on numerics and big standard libraries, etc. When should I consider Lisp the proper language for a given problem? What language features make it the preferred choice then? Is its "purity and generality" an advantage which makes it a better choice than modern languages for some subset of projects? Edit: by request, a little rephrasing (or simply a tl;dr) to make this more specific: a) What problems are solvable with Lisp much more easily than with more common, modern languages like Python or C# (or even F# or Scala)? b) What language features specific to Lisp make it the best choice for those problems?

    Read the article

  • Checking preconditions or not

    - by Robert Dailey
    I've been wanting to find a solid answer to the question of whether or not to have runtime checks that validate input, for the purpose of ensuring a client has stuck to their end of the agreement in design by contract. For example, consider a simple class constructor:

      class Foo
      {
      public:
          Foo( BarHandle bar )
          {
              FooHandle handle = GetFooHandle( bar );
              if( handle == NULL )
              {
                  throw std::exception( "invalid FooHandle" );
              }
          }
      };

    I would argue in this case that a user should not attempt to construct a Foo without a valid BarHandle. It doesn't seem right to verify that bar is valid inside Foo's constructor. If I simply document that Foo's constructor requires a valid BarHandle, isn't that enough? Is this a proper way to enforce my precondition in design by contract? So far, everything I've read has mixed opinions on this. It seems like 50% of people would say to verify that bar is valid, while the other 50% would say that I shouldn't do it; consider, for example, a case where the user verifies their BarHandle is correct, but a second (and unnecessary) check is also being done inside Foo's constructor.
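
    For illustration, here is a minimal sketch of the "check at the boundary" option, written in Python rather than the question's C++ (the class and helper names are placeholders, not taken from the question): the constructor validates its argument and raises immediately, so a contract violation fails fast instead of surfacing later.

      class InvalidHandleError(ValueError):
          """Raised when a caller violates the constructor's precondition."""

      class Foo(object):
          def __init__(self, bar_handle):
              # Precondition: bar_handle must resolve to a usable handle.
              # Checking it here makes a contract violation fail fast, at the
              # cost of a possibly redundant runtime check.
              if bar_handle is None:
                  raise InvalidHandleError("bar_handle must not be None")
              self._handle = self._resolve(bar_handle)
              if self._handle is None:
                  raise InvalidHandleError("invalid BarHandle: no FooHandle could be resolved")

          @staticmethod
          def _resolve(bar_handle):
              # Placeholder for the real lookup (GetFooHandle in the question).
              return getattr(bar_handle, "foo_handle", None)

    The documented-precondition alternative simply drops these checks (or turns them into asserts that disappear in release builds) and trusts the caller instead.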

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is of my local machine and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server it gave me the following strange error:

      Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    The reason was indeed strange, as I was trying to connect from the local box to the local box and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another thing was that I had always been able to connect to SQL Server using 127.0.0.1, so this was a bit strange to me. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer's hosts file for some other purpose. Solution: I opened my hosts file and immediately added an entry like 127.0.0.1 localhost. Once I added it I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc. You will find a file with the name hosts in it; make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and has proper security permissions to execute the task. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works. Anyone can post code, and through community moderation it is cleaned up and eventually valid code ends up in the final library. For complex libraries an elaborate system should probably be created, but for a simple library it is my belief this is already possible even within the Stack Exchange platform. Take a library of extension methods for .NET for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it. Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article

  • 12.04 disabling wireless via dbus does not work

    - by FlabbergastedPickle
    I am using the proprietary rt3652sta driver for my wireless card. It appears as a ra0 device on 64-bit Ubuntu 12.04. According to the online documentation, the following used to work at least up to 10.04:

      dbus-send --system --type=method_call \
        --dest=org.freedesktop.NetworkManager \
        /org/freedesktop/NetworkManager \
        org.freedesktop.DBus.Properties.Set \
        string:org.freedesktop.NetworkManager \
        string:WirelessEnabled variant:boolean:false

    This, however, has no effect on the aforesaid wireless card in 12.04. Also, rfkill does not work, as it does not even list the wireless button (again, likely because the wireless driver is proprietary):

      rfkill list

    It only lists the hci0 (bluetooth) one, and one can block/unblock it accordingly, but this has no effect on the wifi. ifup/ifdown also does not work (AFAICT)... And this leaves me with disabling wireless through the network manager applet. However, trying to do so via dbus appears not to work, and yet I would like to automate it via a script. Any ideas how I could find out the proper dbus structure for the call? Is this even possible in Ubuntu 12.04?
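
    For scripting, a minimal python-dbus sketch of the same property write (assuming the python-dbus bindings are installed; whether NetworkManager actually honours the change for this driver is exactly what the question is about):

      import dbus

      # Talk to NetworkManager on the system bus.
      bus = dbus.SystemBus()
      nm = bus.get_object('org.freedesktop.NetworkManager',
                          '/org/freedesktop/NetworkManager')
      props = dbus.Interface(nm, 'org.freedesktop.DBus.Properties')

      # Equivalent of the dbus-send call above: flip the WirelessEnabled property.
      props.Set('org.freedesktop.NetworkManager', 'WirelessEnabled', False)

      # Read it back to see whether NetworkManager accepted the change.
      print(props.Get('org.freedesktop.NetworkManager', 'WirelessEnabled'))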

    Read the article

  • Unity calendar lens not showing events in Ubuntu 12.04

    - by David_G
    I'm trying to get proper/useful calendar integration into Ubuntu 12.04. I have a Google Calendar (and account) and I want to be able to use this without opening the browser. I want to get the Unity calendar lens working, so that it shows events coming up and allows me a quick way to add new events. However, after installing it, it does not find any events, nor does it allow me to add a new event. Note that I've installed Lightning 1.4, Evolution mirror 0.2.3, Evolution, and the unity-calendar lens. I've also installed calendar-indicator. I suspect that somehow the lens is not getting the calendar information from Thunderbird via Evolution. A bit of searching around led me to try this command: /usr/lib/calendar-lens/calendar-lens-daemon.py. With this result:

      /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
        import gobject._gobject
      Traceback (most recent call last):
        File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 324, in <module>
          daemon = Daemon()
        File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 80, in __init__
          for calendar in evolution.ecal.list_calendars():
      AttributeError: 'NoneType' object has no attribute 'list_calendars'

    Any ideas?

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test the code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often (about 1 in 100 commits, I think, and I think with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit a week than test the whole project's stability/compilability ten times a day. For compilable code only, I use tags and some selected branches (trunk etc.). I see these reasons to commit not fully working or not compilable code: If I am developing a new feature, it is hard to make it work in just a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing some function's prototype or interface, I also have to make hundreds of changes, not mechanical but intellectual ones, and sometimes one of them could cause me to carry out hundreds of commits (but if I want all commits to be stable I should commit once instead of 100 times). In all these cases, making only stable commits means making commits containing many, many changes, and it becomes very hard to find out "What happened in this commit?". Another aspect of this problem is that compiling code gives no guarantee of proper working. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
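
    As an illustration of the hooks mentioned above, here is a minimal git pre-commit hook sketched in Python (the build and test commands are placeholders; a real hook would run whatever the project actually uses):

      #!/usr/bin/env python
      # Save as .git/hooks/pre-commit and make it executable: the commit is
      # rejected if the build or the test suite fails.
      import subprocess
      import sys

      # Placeholder commands -- substitute the project's real build/test steps.
      CHECKS = [
          ["make", "-j4"],      # compile
          ["make", "test"],     # run the test suite
      ]

      for cmd in CHECKS:
          if subprocess.call(cmd) != 0:
              sys.stderr.write("pre-commit: '%s' failed, commit aborted\n" % " ".join(cmd))
              sys.exit(1)

      sys.exit(0)

    Whether such a gate belongs on every commit or only on the shared/stable branches is exactly the trade-off the question raises.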

    Read the article

  • ExaLogic virtual datacenter live at Qualogy

    - by JuergenKress
    Just a quick post to celebrate another significant milestone for Exalogic! After a few days of preparation and some hard work, we succeeded in upgrading our Exalogic quarter rack to the newly released Elastic Cloud version 2.0.1.1.0. This version was just released on July 25th. It turns your Exalogic into a virtual datacenter with many very neat cloud provisioning capabilities. There are many more possibilities in this version to provide strict network and vServer group isolation where needed, and it helps you manage multitenancy and delegate your cloud administration. How did we fare? Apart from some small inconveniences and minor issues, I can tell you it all went remarkably well, provided you do proper homework on the prerequisite requirements and you stick to the instructions all the way through (there are some 37 steps to cover). We, as a specialized Exalogic partner, had early access to this version and did some early adopter work. As a customer this is all done for you, as Oracle will deliver a new Exalogic with this version from the factory if you so desire, or upgrade your current ones. Of course, Qualogy can do this for you as well! Interested? Contact the Qualogy team: [email protected]! WebLogic Partner Community: for regular information become a member of the WebLogic Partner Community, please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: ExaLogic, Qualogy, ExaLogic demo, Exalogic datacenter, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Seeking a better solution to restrict access in the GRUB2 menu

    - by LiveWireBT
    I just read that in certain situations you should also protect access to your GRUB2 menu by setting a password, and that you may refine access by adding --unrestricted or --users as arguments to menu entries and submenus. I read the corresponding pages in the Ubuntu Community Documentation and the Arch Wiki. So, I created /etc/grub.d/01_security, stored usernames and passwords in there, made the file executable and ran update-grub. This is working as intended: every action in the menu prompts for a username and password. But I also want to modify the automatically generated entries to either restrict them to certain users (via --users) or make them available to everyone but not editable by everyone (via --unrestricted). I was able to find the proper lines in 10_linux and edit them accordingly; however, I'd love to see an easier solution. Perhaps an option like GRUB_DISABLE_RECOVERY="true" or GRUB_DISABLE_OS_PROBER=true in /etc/default/grub for easy (re)configuration (for linux and os-prober generated entries). Here's a diff from my 13.10 installation:

      $ diff /etc/grub.d/10_linux /etc/grub.d/10_linux_bak
      123c123
      < echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} --unrestriced \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^$
      ---
      > echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_inde$
      125c125
      < echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} --unrestricted \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_$
      ---
      > echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
      323c323
      < echo "submenu --unrestricted '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_$
      ---
      > echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"

    tl;dr: I'd love to see a simple solution for GRUB2 entries that cannot be modified without a password or are limited to certain users. (Yes, GRUB_DISABLE_RECOVERY="true" is active.)

    Read the article

  • Cannot get temperatures in Dell Studio 1558

    - by Athul Iddya
    I could never get proper temperatures on my Dell Studio 1558; lm-sensors and acpi give wrong readings. The output of sensors is:

      $ sensors
      acpitz-virtual-0
      Adapter: Virtual device
      temp1:        +26.8°C  (crit = +100.0°C)
      temp2:         +0.0°C  (crit = +100.0°C)

    acpi -V gives me:

      $ acpi -V
      Battery 0: Full, 100%
      Battery 0: design capacity 414 mAh, last full capacity 369 mAh = 89%
      Adapter 0: on-line
      Thermal 0: ok, 0.0 degrees C
      Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
      Thermal 0: trip point 1 switches to mode passive at temperature 95.0 degrees C
      Thermal 0: trip point 2 switches to mode active at temperature 71.0 degrees C
      Thermal 0: trip point 3 switches to mode active at temperature 55.0 degrees C
      Thermal 1: ok, 26.8 degrees C
      Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
      Thermal 1: trip point 1 switches to mode active at temperature 71.0 degrees C
      Thermal 1: trip point 2 switches to mode active at temperature 55.0 degrees C
      Cooling 0: LCD 0 of 15
      Cooling 1: Processor 0 of 10
      Cooling 2: Processor 0 of 10
      Cooling 3: Processor 0 of 10
      Cooling 4: Processor 0 of 10
      Cooling 5: Fan 0 of 1
      Cooling 6: Fan 0 of 1

    I suspect even hddtemp gives bogus readings, as it is always at 46:

      $ sudo hddtemp /dev/sda
      /dev/sda: ST9500420AS: 46°C

    I have gone through some bug reports; some people used to have the same problem after resuming from suspend, but I always have this problem. I had updated to the latest BIOS from Windows a couple of weeks ago; will updating from Ubuntu change anything? CORRECTION: hddtemp's readings do change. It is now at 45.
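
    As a cross-check of what the kernel itself exposes, a small sketch that reads the standard sysfs thermal interface directly (it can only show the zones ACPI exports, so on a machine with broken sensor reporting the list may be just as sparse as the acpi output above):

      import glob

      # Each thermal zone exposes a type and a temperature in millidegrees Celsius.
      for zone in sorted(glob.glob('/sys/class/thermal/thermal_zone*')):
          try:
              with open(zone + '/type') as f:
                  zone_type = f.read().strip()
              with open(zone + '/temp') as f:
                  millic = int(f.read().strip())
              print('%s (%s): %.1f C' % (zone, zone_type, millic / 1000.0))
          except (IOError, ValueError) as err:
              print('%s: unreadable (%s)' % (zone, err))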

    Read the article

  • What can be the cause of new bugs appearing somewhere else when a known bug is solved?

    - by MainMa
    During a discussion, one of my colleagues told me that he has some difficulties with his current project when trying to solve bugs. "When I solve one bug, something else stops working elsewhere," he said. I started to think about how this could happen, but I can't figure it out. I sometimes have similar problems when I am too tired/sleepy to do the work correctly and to keep an overall view of the part of the code I am working on. Here, the problem has been going on for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, very badly managed project, where teammates don't have any idea of who does what, and what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It could also be an issue with an old, badly maintained and never documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project just started, and the developer doesn't use anyone else's codebase. So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure the codebase has no architecture at all and was written with no preliminary thinking), requiring a whole refactoring? Pair programming? Something else?
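
    On the "Unit tests (there are none)?" point, even a tiny characterization test pins down behaviour that the next bug fix must not break. A minimal sketch (the function under test is a made-up stand-in, not code from the project in question):

      import unittest

      def normalize_price(raw):
          # Stand-in for an existing function whose current behaviour we want to pin down.
          return round(float(raw.replace(',', '.')), 2)

      class NormalizePriceRegressionTest(unittest.TestCase):
          # Each assertion freezes behaviour the rest of the code already relies on,
          # so a later "fix" that changes it shows up immediately as a failing test.
          def test_comma_decimal_separator(self):
              self.assertEqual(normalize_price('12,50'), 12.5)

          def test_rounding(self):
              self.assertEqual(normalize_price('3.14159'), 3.14)

      if __name__ == '__main__':
          unittest.main()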

    Read the article

  • Inheritance vs containment while extending a large legacy project

    - by Flot2011
    I have a legacy Java project with a lot of code. The code uses the MVC pattern and is well structured and well written. It also has a lot of unit tests and it is still actively maintained (bug fixing, minor feature additions). Therefore I want to preserve the original structure and code style as much as possible. The new feature I am going to add is a conceptual one, so I have to make my changes all over the code. In order to minimize changes I decided not to extend existing classes but to use containment:

      class ExistingClass {
          // .... existing code

          // my code adding new functionality
          private ExistingClassExtension extension = new ExistingClassExtension();

          public ExistingClassExtension getExtension() { return extension; }
      }

      ...
      // somewhere in code
      ExistingClass instance = new ExistingClass();
      ...
      // when I need the new functionality
      instance.getExtension().newMethod1();

    All the functionality that I am adding is inside the new ExistingClassExtension class. Actually I am adding only these two lines to each class that needs to be extended. By doing so I also do not need to instantiate new, extended classes all over the code, and I can use the existing tests to make sure there is no regression. However, my colleagues argue that doing so isn't a proper OOP approach in this situation, and that I need to inherit from ExistingClass in order to add new functionality. What do you think? I am aware of the numerous inheritance/containment questions here, but I think my question is different.

    Read the article

  • htaccess correct, Apache logs still showing the evil visitors with 200 code

    - by bulgin
    I hope someone can help me. Please take a look at the following snippet of Apache logs:

      95-169-172-157.evilvisitor.com - - [12/Nov/2012:09:46:02 -0500] "GET /the-page-I-dont-want-to-deliver.html HTTP/1.1" 200 9171 "http://hackers.ru/" "Mozilla/4.0 (MSIE 6.0; Windows NT 5.1; Search)"

    I have the following included in my .htaccess for the root directory of the website, and there are no other .htaccess files anywhere that would affect this:

      RewriteEngine On
      Options +FollowSymLinks
      ServerSignature Off
      ErrorDocument 403 "Nothing Interesting Here"
      order allow,deny
      deny from evilvisitor.com
      deny from hackers.ru
      deny from anonymouse.org
      allow from all

    I also have GeoIP functioning properly and have this included there:

      #for stuff from different countries
      RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(UA|TR|RU|RO|LV|CZ|IR|HR|KR|TW|NO|NL|NO|IL|SE)
      RewriteRule ^(.*)$ [R=F,L

    I know this works because whenever I attempt to access the website from a proxy in, say, Spain, I get the error message. I also know it works because when accessing the website from anonymouse.org, the proper error code page is displayed. So then why am I still getting these visitors who successfully access the page I don't want them to see with an Apache 200 code when it should be an error code?

    Read the article

  • TDD: Write a separate test for object initialization or relying on other tests exercising it

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad" and all we put in there is the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you take it out and let the other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
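
    The usual way to avoid repeating the load in every test is to push it into the fixture's setup. A minimal sketch of that shape in Python's unittest (class and method names are placeholders; CppUnit's setUp() plays the same role in the question's setting):

      import unittest

      class Widget:
          # Stand-in for the legacy class with an explicit Load step.
          def __init__(self):
              self.loaded = False
          def load(self, source):
              if source is None:
                  raise ValueError('invalid source')
              self.loaded = True
          def render(self):
              assert self.loaded
              return 'rendered'

      class WidgetLoadedTest(unittest.TestCase):
          def setUp(self):
              # Shared initialization: every test below starts from a loaded object.
              # If loading breaks, all tests here fail in setUp, pointing at the load path.
              self.widget = Widget()
              self.widget.load('fixture-source')

          def test_render_after_load(self):
              self.assertEqual(self.widget.render(), 'rendered')

      class WidgetLoadFailureTest(unittest.TestCase):
          def test_load_rejects_missing_source(self):
              self.assertRaises(ValueError, Widget().load, None)

      if __name__ == '__main__':
          unittest.main()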

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles about the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics for example points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002 however, and I'm wondering if things are different now. There has been some research in performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6), would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games, what algorithms are used and why? In particular, is view dependent continuous simplification used or does the runtime overhead make using discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm used (e.g. vertex clustering) to generate them offline, do artists manually create the models, or perhaps a combination of both methods is used?

    Read the article

  • invitation: Oracle Endeca Information Discovery Bootcamp

    - by mseika
    The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in English.

    What will be covered?
    The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to gain valuable skills for the implementation of projects. The course will follow a combination of lectures and hands-on lab sessions, to allow participants to apply the knowledge they have gained by extracting from sample data sources and creating an end-user application that will be used to answer several business questions.

    What You Will Learn
    Architecture: OEID components, use of graphs, overview of clustering
    OEID Installation: architecture planning, infrastructure requirements, installation process, production hints & tips
    OEID Administration: data store management, administrative operations, portal configuration, data sources, system monitoring
    Indexing: Integration Suite, data source analysis, graph (ETL) creation, record design techniques
    Portlets: Studio portlets, custom portlet development, querying functions
    Reporting: Studio applications & best practices, visualizations, EQL

    Prerequisites
    You must bring a laptop with you for the hands-on labs.

    Environment – Laptop Requirements
    For the OEID boot camp, participants will perform the hands-on lab exercises using a virtual machine image. These virtual machines will be provided to participants within a cloud environment, requiring participants to bring a laptop to the Boot Camp that can access a Windows server via Microsoft RDP. Participants will not need to install any software onto their laptops, but must ensure that they have the proper software installed for their OS to connect through RDP to a server.
    Hardware: CPU: dual-core, x64, 1.8 GHz or higher; RAM: 2 GB
    Software: Microsoft Remote Desktop Client; Internet Explorer 7, Firefox, or Google Chrome

    This boot camp is intended for prospective implementers of Oracle Endeca Information Discovery (OEID), or those in a presales role looking to gain insight into the technical benefits of this new package. Attendees should have experience and familiarity with the basic concepts of business intelligence.

    Where and When?
    Monday, October 15th until Wednesday, October 17th included, 9:00 - 18:00
    Oracle France, 15, boulevard Charles de Gaulle, 92715 Colombes
    Register here - limited number of seats!

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing ssh [email protected] asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

      $ cat /etc/pam.d/lightdm | grep keyring
      auth    optional pam_gnome_keyring.so
      session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

      $ pgrep keyring
      1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (and GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) are not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I can set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.

    Read the article

  • Installing Ubuntu Server 12.04 as a software RAID 1 mirror fails to boot

    - by Jeff Atwood
    I'm installing a few new Ubuntu Server 12.04 LTS servers, and they have two 512 GB SSDs. I want them to use software RAID 1 mirroring, so I was following this document religiously, step by step: https://help.ubuntu.com/12.04/serverguide/advanced-installation.html To summarize the above official documentation: to set up a software RAID 1 mirror in Ubuntu Server, you choose manual partitioning during the setup and create, on each drive, a "swap" partition of roughly RAM size and a "physical volume for RAID" partition for the remaining drive size. After that, you set up the RAID 1 mirror using the RAID partitions on drives A and B, and make it an ext4 partition containing the root filesystem. Setup continues from there just fine. One caveat: I was completely unable to mark the "physical volume for RAID" as bootable. When I tried to do that in setup, it had no effect: I could press Enter on the "make bootable" option all day long and nothing would ever change. However, after the install successfully completes, I have a big problem: the system won't boot! I get:

      Reboot and Select proper boot device
      or Insert Boot Media in selected Boot device and press a key

    What did I do wrong? Why can't I mark that "physical volume for RAID" partition bootable during Ubuntu Server setup? Is there some way for me to make the physical volumes for RAID bootable after the fact, perhaps from a live CD or something?

    Read the article

  • Am I an idealist?

    - by ereOn
    This is not only a question, it is also a call for help. Since I started my career as a programmer, I have always tried to learn from my mistakes. I worked hard to learn best practices, and while I don't consider myself a C++ expert, I still believe I'm not a beginner either. I was recently hired into a company for C++ development. There I was told that my way of working was "against the rules" and that I would have to change my mind. Here are the topics where I disagree with my hierarchy (their words):

    "You should not use separate header files for your different classes. One big header file is both easier to read and faster to compile."
    "Trying to use different headers is counter-productive: use the same super-set of headers everywhere, and enforce the use of #pragma hdrstop to hasten compilation."
    "You may not use Boost or any other library that uses nested directories to organize its files. Our build machine doesn't work with nested directories. Moreover, you don't need Boost to create great software."

    One might think I'm somehow exaggerating things, but the sad truth is that I'm not. Those are their actual words. I believe that having separate files enhances maintainability and code correctness, and can speed up compilation through the use of the proper includes. Have you been in a similar situation? What should I do? I feel like it's actually impossible for me to work that way, and day after day my frustration grows.

    Read the article

  • Frustrated with MythTV 0.26

    - by Mike
    I've been using MythTV for a while now. Until a hardware crash I had a 0.25 box running without problems. I had to get new hardware and am now in the process of setting up 0.26. Every time I pick a menu option, it hangs for 1 minute. Every time I try to start the backend, same thing. I pick a new theme in the frontend, but it never gets used. I try to test audio, but all I get is static (from the proper channels, though). I've set up the storage groups in the backend and put a video in /storage/videos, but the frontend won't see it when I scan for changes. I make changes in the frontend configuration and they don't get saved (or get lost randomly 2-3 reloads later). Obviously there is something I am doing catastrophically wrong, but I have no idea what. Are the storage groups not working yet? Maybe I need to delete all the storage group entries and just use the frontend override to set the path? I'm currently using LVM across 3 hard disks, and this has worked well for me in the past. I'd like to use storage groups, but frankly I don't see them working yet at all - especially not for videos (which is what we watch 99% of the time). Does anyone have any suggestions for me to try before I just call 0.26 bad names and wipe the system?

    Read the article

  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcade-ish feel, and implementing physics for it has been a pain in the butt. Our game has a terrain that the character will walk on and, in the future, 3D objects (a house, a tree, etc.) that will have collisions. In terms of complexity, the physics engine should be like World of Warcraft's. We don't need friction, bouncing behaviour or anything more complex like joints, etc. Just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take into account possible 3D objects. Note that these 3D objects need to have convex collisions, so our artists can create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs, which looks cool, but I fear that it may be overkill for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain? Or do they treat the terrain like any other convex mesh? A screenshot of our game so far:
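
    To make the "cast a ray downwards, but against the props too" idea concrete, here is an engine-agnostic sketch in Python rather than Three.js (all names and numbers are made up for illustration; in Three.js itself, THREE.Raycaster.intersectObjects() over the terrain plus the static meshes plays much the same role):

      def ray_triangle(origin, direction, tri, eps=1e-9):
          """Moller-Trumbore: return distance t along the ray, or None if no hit."""
          (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
          e1 = (bx - ax, by - ay, bz - az)
          e2 = (cx - ax, cy - ay, cz - az)
          # p = direction x e2
          p = (direction[1] * e2[2] - direction[2] * e2[1],
               direction[2] * e2[0] - direction[0] * e2[2],
               direction[0] * e2[1] - direction[1] * e2[0])
          det = e1[0] * p[0] + e1[1] * p[1] + e1[2] * p[2]
          if abs(det) < eps:
              return None                      # ray parallel to the triangle
          inv = 1.0 / det
          s = (origin[0] - ax, origin[1] - ay, origin[2] - az)
          u = (s[0] * p[0] + s[1] * p[1] + s[2] * p[2]) * inv
          if u < 0.0 or u > 1.0:
              return None
          # q = s x e1
          q = (s[1] * e1[2] - s[2] * e1[1],
               s[2] * e1[0] - s[0] * e1[2],
               s[0] * e1[1] - s[1] * e1[0])
          v = (direction[0] * q[0] + direction[1] * q[1] + direction[2] * q[2]) * inv
          if v < 0.0 or u + v > 1.0:
              return None
          t = (e2[0] * q[0] + e2[1] * q[1] + e2[2] * q[2]) * inv
          return t if t > eps else None

      def ground_height(x, z, y_from, meshes):
          """Cast a ray straight down from (x, y_from, z) against every triangle of
          the terrain *and* the static objects; return the closest hit's height."""
          origin, down = (x, y_from, z), (0.0, -1.0, 0.0)
          best = None
          for tris in meshes:                  # meshes = [terrain_tris, house_tris, ...]
              for tri in tris:
                  t = ray_triangle(origin, down, tri)
                  if t is not None and (best is None or t < best):
                      best = t
          return None if best is None else y_from - best

    A full physics engine only becomes necessary once you need responses beyond "snap to the nearest surface below/around the player", which is the trade-off the question is weighing.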

    Read the article

  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY - Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks and familiarize yourself with the patching tools in EBS. Having a Proactive Patch Plan is the way to go! Proactive patching is a preventive measure allowing you to have a complete patching strategy when applying patches periodically. Oracle provides several tools to help you get started and set the foundation for a solid and proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy along with the benefits. Among other things it covers the following:
    - How to plan ahead for system downtime
    - Patching tools in E-Business Suite (Autopatch, OUI, OPatch)
    - How to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.)
    - How to properly test your patching plan and move to Production
    Make sure you visit the new E-Business Patching Community! We encourage you to access the "E-Business Patching Community" prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experiences with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin. Related Articles: What's the Best Way to Patch an E-Business Suite Environment? Patch Wizard Utility

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue. I was helping out a client with web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on WordPress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "Why did every single file suddenly get deleted from the server?" They said, "Because someone must have deleted it." I said, "Well, no one did." (Which I'm pretty sure no one did.) They said, "You can pay us to look into the problem." I authorized $150 for them to look into it. About an hour later, everything was magically reinstated. The host said they had a backup of everything and had just restored it. What I'm wondering: does anyone have recommendations of logs I can go through to investigate how the files got deleted in the first place? I've checked their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?

    Read the article
