Search Results

Search found 234 results on 10 pages for 'allan deamon'.

  • bluetooth daemon not running at startup

    - by ffaxer
    I'm trying to connect a Bluetooth mouse to my Xubuntu system using Blueman (v. 1.21). The problem seems to be that bluetoothd is not running at startup, so Blueman refuses to start and only a dialog appears: "Bluez daemon is not running, blueman-manager cannot continue." On my system, bluetoothd will run only as root (sudo), so my current workaround is simply to run sudo bluetoothd manually, which works fine, but I'd like to have it run at startup so that my mouse just works without any interaction from me, if possible. If I try to start bluetoothd as non-root, it reports:

        Bluetooth daemon 4.91
        Unable to get on D-Bus

    In the startup scripts I found the same bluetoothd script in all runlevels and init.d:

        DAEMON=/usr/sbin/bluetoothd
        test -f /usr/sbin/bluetoothd || exit 0
        # bluetoothd normally starts up by udev rules. it needs dbus to function,
        log_progress_msg "bluetoothd"
        pkill -TERM bluetoothd || true
        log_progress_msg "bluetoothd"

    I looked in /etc/udev/rules.d/ but found no reference to bluetoothd. Further, I have already tried the following, with no luck: editing /etc/dbus-1/system.d/bluetooth.conf to include my user (essentially copying the part that was for root) - I tried it both while keeping the root policy and without it; still no luck. Editing /etc/pam.d/common-session and /etc/pam.d/gdm to include the line "session optional pam_ck_connector.so" - in the case of common-session it was already there, but with a "nox11" option, which I tried removing. No luck either. By the way, I'm confused as to which session manager I'm using, since I have both xfce4-session and gdm-session-worker running. Anyway, I hope someone is savvy enough to figure it out or bring some hints; otherwise I sincerely apologize for wasting your time! I'll sign off with uname -a:

        Linux [mycompname] 3.0.0-9-lowlatency #12ppa1~natty1-Ubuntu SMP PREEMPT Mon Aug 22 06:52:15 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    Peace B)
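
    For reference, the /etc/dbus-1/system.d/bluetooth.conf edit described above would look roughly like the fragment below (a sketch only: the user name is a placeholder, and the allow rules should mirror whatever the existing root policy in that file grants):

        <policy user="youruser">
          <!-- mirror the root policy: let this user own and talk to org.bluez -->
          <allow own="org.bluez"/>
          <allow send_destination="org.bluez"/>
        </policy>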

    Read the article

  • Ubuntu 12.04 won't shut down - stopping winbind daemon

    - by jan
    My Precise Pangolin sometimes won't shut down: the screen is black with text on it, and mostly the last line says something like "stopping winbind daemon" (sometimes also VirtualBox, which is above the winbind line; edit: sometimes the last line says "running unattended updates"), and it stays like this for about ten minutes. Then I usually hold the power button for 5 s to shut it down. It's very unpredictable: sometimes the computer shuts down without problems and sometimes it hangs. I've tried many ways to shut it down: the hardware button, the panel applet, sudo shutdown -h now, sudo poweroff, sudo halt, etc. Even sudo reboot or restart from the panel applet has this problem. Sometimes it works OK, but every method named has hung at least once on the same (damned) line. My specs: FUJITSU SIEMENS LIFEBOOK E8310, Intel Core2 Duo T7300 @ 2.00GHz, 3GB RAM, GPU: Mobile Intel(R) 965 Express Chipset Family, Ubuntu 12.04.2 32bit, 3.5.0-41-generic kernel (but it did it on older kernels and 12.04.x systems too). Any ideas what I should try next? Thanks a lot! Jan
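
    One way to narrow a hang like this down, sketched below, is to stop the suspects by hand before shutting down and watch which one stalls (service names assume the stock Ubuntu packages; the VirtualBox service may be called vboxdrv instead, depending on how it was installed):

        time sudo service winbind stop      # if this hangs on its own, winbind is the culprit
        time sudo service virtualbox stop   # same test for the other suspect
        sudo shutdown -h now                # if both stopped cleanly, shutdown should too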

    Read the article

  • CodePlex Daily Summary for Saturday, March 13, 2010

    CodePlex Daily Summary for Saturday, March 13, 2010

    New Projects
    - [Experiment] vczh DBUI: EXPERIMENT. Auto UI adaptor for a specified database
    - [SAMPLE PROJECT] Branch and Patch with GIT: I'll paste more here later...
    - BlogPress: This web application provides a content management system.
    - Coot: Facebook Photos Screensaver.
    - DotNetNuke® JDMenu: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin.
    - DotNetNuke® RadRotator: dnnRadRotator makes it easy to add Telerik RadRotator functionality to your site or custom module. Licensing permits anyone to use the components ...
    - DotNetNuke® RadTabStrip: DNNRadTabStrip makes it easy to add Telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (incl...
    - DotNetNuke® RadTreeView: DNNRadTreeView makes it easy to add Telerik RadTreeView functionality to your module or skin. Licensing permits anyone to use the components (incl...
    - DotNetNuke® Skin Garden: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 stru...
    - ElmasFC: Elmas Facebook Checker
    - Front Callback Protocol: Front Callback Protocol makes it easier for developers to create a network application in the .NET languages. It provides the flexibility of using TCP...
    - Glass UI: A Windows Forms control library built specifically for Aero Glass. This will consist of existing controls, modified to render on glass, as well as ...
    - Guitarist Tools: Some tools for guitarists.
    - HomeUX: HomeUX is home control software featuring a Silverlight-based touch screen user interface.
    - Homework Helper: Homework Helper helps students train on simple homework like "What's the name of the capital of France?", question-and-answer based. It's made i...
    - IGNOU Mini Project Allocation: A mini project allocation system.
    - Images Compiler: Images Compiler is a pretty small WPF application for users who want to resize, rename, disable colors... for a large number of images in a very short time. It...
    - InstitutionManagementSystem: Institution Management System prototype
    - ManagedCv: ManagedCv is a library for using OpenCV from C#/VB
    - management joint ownership: Application for managing the actions carried out within a joint ownership (copropriété)
    - MapWindow6: MapWindow 6.0 is an open source geographic information system (GIS) and library of geospatial software development tools for .NET, written in C#. T...
    - Morris Auto: Morris Auto is a social application build template
    - Mouse control with finger detection: The aim of this project is to design an application that recognizes the movement of a finger through a webcam and then controls the mouse. The pr...
    - ORAYLIS BI.Quality: ORAYLIS BI.Quality makes it easier to develop BI solutions in an agile environment. It adapts unit testing to BI development.
    - Propositional Framework: Different sales propositions are presented to the user using data collected from marketing intelligence, browser history or user behaviour as they ...
    - ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them ...
    - shatkotha web portal: source code of the web portal shatkotha
    - Survey and Registration tools: Two Silverlight tools for adding surveys and registration to your website
    - Table Storage Backup & Restore for Windows Azure: Allows various backup & restore options for Windows Azure Table Storage accounts: backup from one storage account to another; backup a storag...
    - Topsy Lib: This project provides wrapper methods to call Topsy's web services from C#.
    - twNowplaying: twNowplaying is a small and handy utility that allows you to tweet your currently playing track from Spotify to Twitter. Followed by the #twNowplay...
    - XOM: XOM: Xml Object Mapper. Automatically populate your objects with data from XML via an XML map. Never do this again: if(e.GetElementsByTagName("us...
    - YS Utils: The library contains a set of useful classes which may solve common tasks for different applications.
    - ZabViewer: CS Project

    New Releases
    - AppFabric Caching Admin Tool: AppFabric Caching Admin Tool (0.8): System requirements: .NET 4.0 RC, AppFabric Caching Beta2. Tested on: Win 7 (x64)
    - ASP.Net Routing Configuration: mal.Web.Routing v0.9.2.0: mal.Web.Routing v0.9.2.0
    - DotNetNuke IM Module of Facebook Like Messenger: DNN IM Module of Facebook Like Messenger: Empower a DNN website with a 1-to-1 chat solution named Free Facebook Messenger Style Web Chat Bar of 123 Web Messenger. 1. If you need to use the c...
    - DotNetNuke® Form and List (formerly User Defined Table): 05.01.02: Form and List 05.01.02 (Release Candidate 2). 5.1.2 will be the next stabilization release. Major highlights: fixed Cancel action in a form, it doesn't ...
    - DotNetNuke® JDMenu: DNN jdMenu 1.0.0: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin. Many thanks to Jonathan Sharp of Outwest Media for creati...
    - DotNetNuke® RadTabStrip: DNNRadTabstrip 1.0.0: DNNRadTabStrip makes it easy to add Telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (inclu...
    - DotNetNuke® RadTreeView: DNNRadTreeView 1.0.0: DNNRadTreeView makes it easy to create skins which use the Telerik RadTreeView functionality. Licensing permits anyone (including designers) to use...
    - DotNetNuke® Skin Garden: Garden Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 struc...
    - ElmasFC: Elmas FC: This program checks your Facebook account automatically.
    - Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: Free Download DNN IM Module and Source Code: 123 Web Messenger offers free DotNetNuke instant messaging to help you embed IM software into a DNN website by integrating 123 Web Messenger with D...
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.4 Released: Today we have released the final version of Visifire v3.0.4, which contains the following major features: zooming; Step Line chart; ...
    - Home Access Plus+: v3.1.0.0: Version 3.1.0.0 release. There is a problem with this release; get version 3.1.1.1. Change log: fixed ampersand issue, added unzip features, added help desk ...
    - Home Access Plus+: v3.1.1.1: Version 3.1.1.1 release. Change log: fixed the help desk. File changes: ~/App_Data/Tickets.xml, ~/bin/CHS Extranet.dll, ~/bin/CHS Extranet.pdb
    - Homework Helper: Homework Helper 1.0: The first release of Homework Helper. It has basic functionality. Check the documentation or press F1 while using the program.
    - Homework Helper: Homework Helper v1.0: This is the latest version of Homework Helper, just a few updates since the last one (a couple of minutes ago). Now it saves your settings! Instr...
    - IBCSharp: IBCSharp 1.02: What IBCSharp 1.02.zip unzips to: http://i39.tinypic.com/28hz8m1.png Note: the above solution has MSTest, Typemock Isolator, and Microsoft CHESS c...
    - InfoService: InfoService v1.5 RC 1: InfoService release candidate. Please note this is an RC: it should be stable, but I can't guarantee that, so use it at your own risk. Please read Pl...
    - IntX: IntX 0.9.3.3: Fixed issue #7100
    - IronRuby: 1.0 RC3: The IronRuby team is pleased to announce version 1.0 RC3! As IronRuby approaches the final 1.0, these RCs will contain crucial bug fixes and enhanc...
    - MAISGestão: LayerDiagram (pdf): This is the final version of the layer diagram
    - MapWindow6: MapWindow 6.0 msi (March 12): After some significant trouble with svn, this release represents a test of using Mercurial for our version control. Hopefully the choice of this new ...
    - MSBuild Mercurial Tasks: 0.2.0 Beta: This release realises Scenario 2 and provides two MSBuild tasks: HgCommit and HgPush. This task allows creating a new changeset in the current...
    - NLog - Advanced .NET Logging: NLog 2.0 preview 1: This very experimental first build of NLog from the 2.0 branch is available for download. It's not alpha, beta or even gamma, just a preview of upco...
    - Open NFe: Fonte DANFE v1.9.5: Source code for version 1.9.5 of DANFE
    - OpiConsole (XNA): OpiConsole 0.3: OpiConsole 0.3 includes: PC & Xbox support; default commands (cvar, quit etc.); the ability to add custom commands; ...
    - ORAYLIS BI.Quality: Release 1.0.0: Release 1.0.0
    - Pcap.Net: Pcap.Net 0.5.0 (38141): Pcap.Net - March 2010 release. Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It features almost all WinPcap features and include...
    - PhysX.Net: PhysX.Net 0.12.0.0 Beta 1: This is a beta release of 0.12.0.0 for people to test and provide feedback on. It targets PhysX 2.8.3.21
    - Pólya: Pólya 2010 03 13 alpha: Pólya is a collection of generic data structures, as generic as C#/.NET allows them to be. In this first release the focus has been on some fundam...
    - Prolog.NET: Prolog.NET 1.0 Beta 1.3: The installer includes: the primary Prolog.NET assembly, the Prolog.NET Workbench, the Prolog.NET Scheduler sample application, the PrologTest console applicatio...
    - Propositional Framework: USP: The initial code drop was based on the Autocomplete demo and the neural network approach used by Jeff Heaton (@ http://www.heatonresearch.com/) - e...
    - Protocol Transition with BizTalk: Source: Stable build
    - RoTwee: RoTwee (7.1.0.0): Now a picture of your friend is shown in RoTwee. Place the mouse cursor on the tweet!
    - ServStop: 0.5.0.0: Initial release. Contents of ServStop0500.zip: ServStop.exe 0.5.0.0, initial release, 32,768 bytes. Other files: none. Source code available on "...
    - SkeinLibManaged: Release 1.0.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context-sensitive help and IntelliSense. This is the Release version...
    - sPWadmin: pwAdmin v0.9a: Added: LiveChat plugin. Shows the latest 150 chat messages from "/Your PW Server/logservice/logs/world2.chat", refreshes chat every 5 seconds. Allow...
    - sPWadmin: pwAdmin v0.9b: Some minor fixes to style, layout & different browser support. Merged the forms in the server configuration tab into a single form that can now save mul...
    - Topsy Lib: Initial release: Initial Debug and Release versions.
    - twNowplaying: twNowplaying: Press the Twitter icon to get started; don't forget to submit bugs to the issue tracker. What's new: this release has some minor UI fixes.
    - twNowplaying: twNowplaying Alpha 1.0: Press the Twitter icon to get started; don't forget to submit bugs to the issue tracker. Thank you, and enjoy! =)
    - VCC: Latest build, v2.1.30312.0: Automatic drop of latest build
    - WPF Dialogs: Version 0.1.3: The FolderBrowseDialog / FolderBrowseDialog - Deutsch was improved and extended.
    - XOM: XOM 0.1A: Just a release of the code I have so far. In order to get your objects to work, you need to create an XML file to define your objects. Here is a ...
    - Xpress - ASP.NET MVC personal blog application (个人博客程序): xpress2.1.1.0312.beta: Latest beta. Note: this version is not compatible with 2.1.0. Changes: theme files are now placed under the Views folder; theme files support a strongly typed Model; theme resource files are placed under the Resources directory
    - YS Utils: V 1.0.0.0: This is the first release. The YSUtils library contains a first set of classes. The ZIP file contains documentation.

    Most Popular Projects
    - MetaSharp
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - ASP.NET Ajax Library
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects
    - Rawr
    - N2 CMS
    - BlogEngine.NET
    - Fasterflect - A Fast and Simple Reflection API
    - patterns & practices – Enterprise Library
    - Farseer Physics Engine
    - SharePoint Team-Mailer
    - Caliburn: An Application Framework for WPF and Silverlight
    - Calcium: A modular application toolset leveraging Prism
    - jQuery Library for SharePoint Web Services

    Read the article

  • Installing Daemons

    - by Shahmir Javaid
    Dear Stackoverflowers, a simple link would be enough for me to understand how to install my C++ program as a daemon on UNIX. Now, I know some will say this should be on Server Fault, but as far as I understand it, I need the init.d shell script to actually provide the start and stop for the daemon. If you could show me a simple shell script for a daemon, and the file directories everything required is associated with, that would be great. I was going to follow http://www.linux.com/archive/feed/46892 but if you read the comments, everyone is moaning x( . P.S. I've already written the C++ code required to run as a daemon; I just need to know how to actually install it as a daemon. At the moment I'm using crontab, which is just not a good long-term solution for my problem. Thanks in advance.
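
    For illustration, a minimal SysV init script might look like the sketch below (the name and path are placeholders; start-stop-daemon is the Debian/Ubuntu helper, and the program is assumed to daemonize itself, as described above):

        #!/bin/sh
        # /etc/init.d/mydaemon - minimal start/stop wrapper (hypothetical names)
        DAEMON=/usr/local/bin/mydaemon
        test -f $DAEMON || exit 0
        case "$1" in
          start)   start-stop-daemon --start --quiet --exec $DAEMON ;;
          stop)    start-stop-daemon --stop --quiet --exec $DAEMON ;;
          restart) $0 stop; $0 start ;;
          *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
        esac
        exit 0

    On Debian-style systems it would then be hooked into the runlevels with sudo update-rc.d mydaemon defaults.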

    Read the article

  • missing entry in launchctl after restart

    - by Nobik
    I have a daemon which is registered with launchctl to run as a system-wide daemon and to load automatically on every system startup, or whenever the daemon crashes. I registered this daemon with:

        sudo launchctl load -w /Library/LaunchDaemons/plist.file

    Everything works fine: my daemon is registered, and with sudo launchctl list I can find the entry in launchctl. But on some Macs, after the user restarts the system, my daemon is not running, and with the command sudo launchctl list I can't find the entry anymore. Any ideas why the entry is missing?
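
    For comparison, a system-wide launchd job that starts at boot and is kept alive after a crash usually carries at least the keys in this sketch (the label and program path are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
          <key>Label</key><string>com.example.mydaemon</string>
          <key>ProgramArguments</key>
          <array><string>/usr/local/bin/mydaemon</string></array>
          <key>RunAtLoad</key><true/>  <!-- start at boot -->
          <key>KeepAlive</key><true/>  <!-- relaunch if it exits -->
        </dict>
        </plist>

    If a plist like this still disappears after a restart, comparing it against a Mac where the job survives is a reasonable next step.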

    Read the article

  • How to trace a raw (character) device stream on Unix?

    - by Fabien
    I'm trying to trace what is transiting through a raw (character) device on a Unix system (e.g. /dev/tty.baseband) for debugging purposes. I am thinking of creating a daemon that would, upon start, rename /dev/tty.baseband to /dev/tty.baseband.old, create a raw node /dev/tty.baseband and spawn two threads: thread 1 reading /dev/tty.baseband.old and writing into /dev/tty.baseband; thread 2 reading /dev/tty.baseband and writing into /dev/tty.baseband.old. This would work a little bit like a MITM process. I wonder if there is not a 'standard' way to do this.
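
    One fairly standard way to get this without writing the daemon yourself is socat, which can interpose a pty and log everything crossing it (a sketch: assumes socat is installed; device names as above):

        sudo mv /dev/tty.baseband /dev/tty.baseband.old
        sudo socat -x -v \
            PTY,link=/dev/tty.baseband,raw,echo=0 \
            /dev/tty.baseband.old,raw,echo=0 \
            2> /tmp/baseband-trace.log    # -x -v dump all traffic to stderr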

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of three different types of 'servers':

    - Application server: should be able to run CentOS (or another light Linux distro), Apache, PHP and GD (so it does rely on its CPU); should be extremely reliable and fast.
    - Database server: should be able to run MySQL and... well, do nothing else :P; should be extremely reliable and fast.
    - Storage server: should be able to run some kind of file transfer daemon (like FTP, CouchDB, etc.) and do nothing else; should be extremely reliable and fast.

    So technically, by transferring all static data to two different servers/services, the application server can totally focus on the web pages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud storage/cloud computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay for both "including CDN" and "excluding CDN" when you decide to enable the delivery network, or do you only pay for "including CDN"? Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answers so far: I shouldn't write my own nameserver software; instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Fans running very fast on MacBook Pro 8.1 ubuntu 12.04

    - by Tomasz Kacprzak
    I installed Ubuntu 12.04 on a MacBook Pro 8.1 and one of the first things I noticed was that the fans were starting to spin very fast every few minutes, for 10-30 seconds, and then going back to normal. That was happening even without any processor load, when completely idle. The fans were usually spinning at 4000 RPM and made a lot of noise. The computer was not getting hotter than usual. When running OS X Lion there was no noise at all, with the fans at 2000 RPM almost all the time. I spent some time on it and found out that Precise uses a daemon to control the temperature, called macfanctld. You can use /etc/macfanctld.conf to set the configuration. I found out that the high fan speed is not due to the temperature actually getting hot, but because there are two sensors which report wrong numbers (you can check that using the 'sensors' command):

        TW0P: +129.0°C
        TCTD: +256.0°C
        TCFC: +0.0°C
        TMBS: +0.0°C

    or by setting the macfanctld log level to 2:

        Speed: 4992, *AVG: 56.9C, TC0P: 50.2C, TG0P: 51.5C, Sensors: TB0T:34 TB1T:34 TB2T:33 TC0C:58 TC0D:56 TC0E:59 TC0F:60 TC0P:50 TC1C:58 TC2C:58 TC3C:58 TC4C:57 TCFC:0 TCGC:57 TCSA:53 TCTD:256 TG0D:52 TG0P:52 THSP:42 TM0S:64 TMBS:0 TP0P:54 TPCD:60 TW0P:129 Th1H:51 Th2H:48 Tm0P:40 Ts0P:32 Ts0S:43

    Moreover, TCTD was randomly jumping from temperatures of 0 to 256, so this may be the reason for the unjustified random fan speeds. macfanctld takes an average of the sensors, including the values above, so the actual AVG temperature used to control the fans is wrong, usually biased up, hence the high RPM and noise. The workaround is to use an option in macfanctld.conf which allows ignoring the malfunctioning sensors:

        exclude: 13 16 21 24

    After a reboot the reported temperatures are normal and the fans work at reasonable speeds. I tested the response of the fans to heavy processor load by asking MATLAB to invert a 10000x10000 matrix: the AVG temperature jumped to 63 deg and the fans to a maximum of 6200 RPM, and then everything returned to normal. So I think it is safe so far. There is an expired bug about the failing sensor readings, https://bugs.launchpad.net/ubuntu/+source/linux/+bug/955538, which may be good to open again. My question would be: does anyone know what the failing sensors do and whether there is any danger in excluding them? Maybe there is some better solution to this problem?
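
    As a quick recipe, the workaround above boils down to (paths per the question; the restart assumes the stock macfanctld init job):

        echo "exclude: 13 16 21 24" | sudo tee -a /etc/macfanctld.conf
        sudo service macfanctld restart
        # the raw sensor readings will still look wrong, but macfanctld now
        # leaves them out of the average it uses to drive the fans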

    Read the article

  • (13) Permission denied on Apache CGI attempt

    - by ncv
    I have recently upgraded my Apache2 server and am now unable to run a CGI app. My logs are showing:

        (13) Permission denied: unable to connect to cgi daemon after multiple tries

    I understand that the error message means Apache is being denied some permission to some file, and I'm stumped as to how to track down and solve the problem. Is the file mentioned in the error message truly the blocked file? Or might the problem be caused by some other needed file? The .cgi file is right where it has always been, under /usr/share. The file ownership (root) and permissions (world readable/executable) are the same as they have always been for the file and its ancestors. The SELinux file labels are unchanged. The SELinux audit log shows no denial associated with Apache or the CGI program. In case of a dontaudit condition, I enabled auditing, but still saw nothing. I switched SELinux into permissive mode briefly, to no avail. I even tried restarting Apache while in permissive mode. This did not solve the problem. Any suggestions on how to solve this problem? I'm tempted to just revert to the older Apache.
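
    Since the "cgi daemon" in that message is Apache's own mod_cgid helper process, the socket it uses to talk to httpd is worth checking before the .cgi file itself (a sketch; config paths vary by distro):

        grep -ri scriptsock /etc/httpd /etc/apache2 2>/dev/null   # where is the cgid socket?
        ls -ldZ /var/run/httpd 2>/dev/null                        # can the httpd user write there?
        # If the socket directory is the problem, point the Scriptsock directive at a
        # directory httpd can write to, then restart Apache and retry.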

    Read the article

  • Time-Machine backup over SSH tunnel to NFS mount

    - by BTZ
    I've recently started using a new NAS which runs CentOS 6.2. One of the purposes of the NAS is to serve as a backup target. I have been using Apple's Time Machine for a while and I am very satisfied with it, so I'd like to continue using it. Backing up directly to an address in my network is no hassle; all works fine. For security reasons, though, I'd like all my traffic to go through an SSH tunnel to the NAS. This way I can avoid needing to set up a VPN server (for personal reasons). As of NFSv4 the NFS daemon is bound to port 2049, which makes it easy for me to direct all traffic through an SSH tunnel. Tunnel:

        ssh -f admin@ms -L 2000:localhost:2049 -N

    Mount:

        mount -t nfs -o nfsvers=4,rw,proto=tcp,sync,intr,hard,timeo=600,retrans=10,wsize=32768,rsize=32768,port=2000 localhost:/mac_backup /Volumes/backup

    This works fine for Finder/terminal, and throughput is almost equal to direct traffic (the CPU of the NAS does ride high when I reach max bandwidth, though). Now the problem: with Time Machine I can't use the NFS mount point mounted on localhost. TM seems to try to connect to it and then gives me an "OSStatus error 65". I also tried using NFSv3 (I correctly forwarded all ports), with no luck. Can anyone shed some light on this and/or give a solution?
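
    Two things that may be worth trying (hedged suggestions: the first is the well-known toggle for unsupported network volumes, the second assumes OS X 10.7 or later, where tmutil exists):

        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
        sudo tmutil setdestination /Volumes/backup   # point Time Machine at the mount explicitly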

    Read the article

  • Debian Bluetooth headphones not working

    - by cYrus
    Hardware: headphones and a Bluetooth dongle (maybe not exactly these models). Setup: I tried to follow some guides; here's what I've done so far. Install the software:

        sudo apt-get install bluez-utils bluez-alsa

    Reboot (just to be sure):

        $ dmesg | grep -i bluetooth
        [   20.268212] Bluetooth: Core ver 2.16
        [   20.268230] Bluetooth: HCI device and connection manager initialized
        [   20.268233] Bluetooth: HCI socket layer initialized
        [   20.268235] Bluetooth: L2CAP socket layer initialized
        [   20.268239] Bluetooth: SCO socket layer initialized
        [   20.284685] Bluetooth: RFCOMM TTY layer initialized
        [   20.284692] Bluetooth: RFCOMM socket layer initialized
        [   20.284693] Bluetooth: RFCOMM ver 1.11
        [   20.335375] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [   20.335378] Bluetooth: BNEP filters: protocol multicast

    The daemon is running:

        $ /etc/init.d/bluetooth status
        [ ok ] bluetooth is running.

    Plug in the dongle:

        $ dmesg | tail
        [...]
        [23108.352034] usb 5-2: new full-speed USB device number 2 using ohci_hcd
        [23108.571131] usb 5-2: New USB device found, idVendor=0a12, idProduct=0001
        [23108.571136] usb 5-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
        [23108.629042] usbcore: registered new interface driver btusb

    Put the headphones in pairing mode and try scanning:

        $ hcitool scan
        Scanning ...

    Found nothing. What's next? What should I try? I'll update this question as soon as you provide hints.
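
    A few diagnostic steps that may be worth trying next (a sketch; hcidump comes from the separate bluez-hcidump package):

        hciconfig -a               # is hci0 listed, and does it say UP RUNNING?
        sudo hciconfig hci0 up     # bring the interface up if it shows DOWN
        sudo hcitool -i hci0 scan  # scan explicitly on that interface
        sudo hcidump -X            # in a second terminal: watch HCI traffic during the scan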

    Read the article

  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, which is client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition to that, when you run your own NTP servers, it is good practice to peer them together, so that if one of them loses connectivity to its upstream servers, it will get time from its peers. But NTPd can also work with broadcast and/or multicast distribution of time data, with the documentation stating: "broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population". The documentation also says elsewhere: "It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance." I can see one obvious administrative benefit: you don't have to manually specify and update your list of NTP servers in the clients' ntp.conf, so to me it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3-4 servers). I expect network traffic to be a little higher with broadcasts instead of client/server associations, but given the usual gigabit Ethernet LAN the impact should be negligible unless you have a very, very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?
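
    For concreteness, the two sides of a broadcast setup are only a line each in ntp.conf (the address is an example; the authentication that the documentation recommends for broadcast mode is omitted here):

        # on each server:
        broadcast 192.168.0.255
        # on each client:
        broadcastclient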

    Read the article

  • Installing sphinx on a web hosting server

    - by fiftyeight
    I want to install Sphinx search on a web hosting server. I'm on a Linux VPS with HostGator, but I have never actually installed anything on a remote server, so this will be a first for me. If there's anyone here who has installed Sphinx, you could really help me. I had some problems when using Sphinx on my PC with the permissions and the MySQL files; eventually I got it working. Anyway, I'd be really grateful if anyone can help me with some questions. Do I need root access to install Sphinx? I have root access to the server, but I'd connect to it as a normal user, since doing stuff as root is always less secure. Can anyone tell me as which user I need to execute the indexer and the search daemon? Should I use root access for this? When I did it as a normal user on my PC, it gave me some trouble with the PID file and the log files. The last time I executed the search daemon, I executed it as a normal user: I created the folder /var/log/ for the log files and did chmod 777 on it, but still, when I executed the search daemon, it created the PID file "searchd.pid" but with no permissions for some reason. Any idea why? Thanks in advance. fiftyeight
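
    A common arrangement, sketched below, is a dedicated unprivileged user that owns the PID and log directories (the user name and paths are assumptions; the searchd section of sphinx.conf has pid_file, log and query_log keys that should point at them):

        sudo useradd -r -s /bin/false sphinx
        sudo mkdir -p /var/log/sphinx /var/run/sphinx
        sudo chown sphinx:sphinx /var/log/sphinx /var/run/sphinx
        sudo -u sphinx indexer --all   # build the indexes as that user
        sudo -u sphinx searchd         # run the daemon as the same user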

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems: Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive. Updating the dataset is very time consuming and requires remounting between set A and B. The only reasonable handling is operating on the block device for backup, copying, etc. What I would like is a daemon that: speaks HTTP (GET, PUT, DELETE and perhaps UPDATE); stores data in an efficient structure. The index should remain in memory, and considering the number of objects, the overhead must be small. The software should be able to handle massive numbers of connections with little (if any) ramp-up time. The index should be read into memory at startup. Statistics would be nice, but not mandatory. I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • bluez 5.19 PS4 controller

    - by Athanase
    I currently have a problem when pairing my computer with a PS4 remote. On my Ubuntu 14.04 I removed everything related to bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs:

        jean@system ~ hcitool
        hcitool - HCI Tool ver 5.19

        jean@system ~ hcitool dev
        Devices:
            hci0    00:15:83:4C:0C:BB

        jean@system ~ bluetoothctl
        [bluetooth]# version
        Version 5.19

        jean@system ~ bluetoothctl
        [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default]

        jean@system ~ lsusb
        Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)

    So here is what happens. When I try to hard-pair the controller with the computer, by holding the Share and PS buttons for a while, everything works as expected and the pairing is done properly. After a hard pairing, if I try to pair by pressing the PS button only, nothing happens. In order to debug it, I first power up the bluetooth dongle:

        jean@system ~ sudo hciconfig hciX up

    and then I run the bluetooth daemon bluetoothd:

        jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n
        bluetoothd[11270]: Bluetooth daemon 5.19
        bluetoothd[11270]: src/main.c:parse_config() parsing main.conf
        bluetoothd[11270]: src/main.c:parse_config() discovto=0
        bluetoothd[11270]: src/main.c:parse_config() pairto=0
        bluetoothd[11270]: src/main.c:parse_config() auto_to=60
        bluetoothd[11270]: src/main.c:parse_config() name=%h-%d
        bluetoothd[11270]: src/main.c:parse_config() class=0x000100
        bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID'
        bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server
        bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command
        bluetoothd[11270]: Starting SDP server
        bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513
        ...
        bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
        bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0
        bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00
        bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A
        bluetoothd[11270]: src/device.c:device_set_bonded()
        bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A
        bluetoothd[11270]: src/device.c:device_set_bonded()
        bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0
        bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0
        bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0
        bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
        bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513
        bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered
        bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0
        bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0
        bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
        bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
        bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled
        bluetoothd[11270]: src/adapter.c:trigger_passive_scanning()
        bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system
        bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname:
        bluetoothd[11270]: plugins/hostname.c:update_name() name: system
        bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system
        bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop
        bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01
        bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0
        bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0
        bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0

    And then I press the PS button of the PS4 controller:

        bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5
        bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17
        bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17
        bluetoothd[11270]: Refusing input device connect: No such file or directory (2)
        bluetoothd[11270]: profiles/input/server.c:confirm_event_cb()
        bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device
        bluetoothd[11270]: src/adapter.c:dev_disconnected() Device A4:15:66:C1:0D:2A disconnected, reason 3
        bluetoothd[11270]: src/adapter.c:adapter_remove_connection()
        bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3
        bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe
        bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e
        bluetoothd[11270]: src/device.c:device_bonding_failed() status 14
        bluetoothd[11270]: src/adapter.c:resume_discovery()

    So I don't know what is happening here, and a bit of help would be appreciated.

    Read the article

  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotchas, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thursday (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" - people who have previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning. As ever, there is plenty going on in person: we've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancy trying it out, we are always after people to do 1-5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these user group events - just email us at [email protected]. Want removing from this email list? Then just reply with "remove please" on the subject line.

    Kimberly Tripp and Paul Randal Master Class - Thurs, 17th June - London
    REGISTER NOW AND GET A SECOND REGISTRATION FREE*
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years of combined experience working with SQL Server in the field and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top 10 points to follow around operations and maintenance for a well-performing and available data tier!
    Where: Radisson Edwardian Heathrow Hotel, London. When: Thursday 17th June 2010. *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql - on the registration form simply quote discount code BOGOF for both yourself and your colleague and you will save 50% off each registration - that's a 249 GBP saving! This offer is limited; book early to avoid disappointment.

    Wed, 23 Jun - READING - Evening meeting: Introduction to NoSQL (Not Only SQL) - Gavin Payne; T-SQL Gotchas and how to avoid them - Ashwani Roy; Introduction to Recency Frequency - Tony Rogerson; Reporting Services - Tim Leung
    Thu, 24 Jun - CARDIFF - Evening meeting: Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI-related session TBC
    Mon, 28 Jun - EDINBURGH - Evening meeting: Replication (Components, Administration, Performance and Troubleshooting) - Neil Hambly; Server Upgrades (Notes and Best Practice from the Field) - Satya Jayanty
    Wed, 14 Jul - LONDON - Evening meeting, sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software: Physical Join Operators in SQL Server - Ami Levin; Workload Tuning - Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, In-RAM DBs, Fragmentation) - Tony Rogerson; Complex Event Processing - Allan Mitchell

    Many thanks, Tony Rogerson, SQL Server MVP, UK SQL Server User Group, http://sqlserverfaq.com

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder whether the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between the GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring overclocking, optimized hardware cooling and so on to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones with tight budgets preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file-sharing networks. Then came the era of the Xbox and PS3. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is: PC gamers deserve an equally democratized solution. This is where I think cloud computing can come to the rescue. It can minimize hardware requirements. It can virtually end software piracy and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on Nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on... Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Is there a perfect algorithm for chess?

    - by Overflown
    Dear Stack Overflow community, I was recently in a discussion with a non-coder on the possibilities of chess computers. I'm not well versed in theory, but I think I know enough. I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that, even if you search the entire space of all combinations of player 1/player 2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could make. My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however you define that). However, being a programmer who has taken CS, I know that even your good choices, given a wise opponent, can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic. Most chess computers try to match a possible endgame to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable though. -- thanks, Allan Edit: Hmm... looks like I ruffled some feathers here. That's good. Thinking about it again, it seems like there is no theoretical problem with solving a finite game like chess; this is essentially Zermelo's theorem: in a finite two-player game of perfect information, one of the players has a winning strategy or both can force a draw. I would argue that chess is a bit more complicated than checkers, in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally). I guess my thought experiment was that whenever a branch in the tree is taken, the algorithm (or memorized paths) must find a path to a mate (without getting mated) for any possible branch of the opponent's moves. After the discussion, I will buy that, given more memory than we can possibly dream of, all these paths could be found.

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a German hoster (virtualized system).

        # uname -a
        Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I want to migrate a web CMS system called Contao. It's not my first migration, but it is my first migration with MySQL connection issues. The migration went successfully and I have the same Contao version running (it's more or less just copy/paste). For the database behind it, I did:

        apt-get install mysql-server phpmyadmin

    I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. Data import via phpmyadmin worked just fine. I can access the backend of the CMS (which already needs to deal with the database). If I try to access the frontend now, I get the following error:

        Fatal error: Uncaught exception Exception with message Query error: Lost connection to MySQL server during query (<query statement here, nothing special, just a select>) thrown in /var/www/system/libraries/Database.php on line 686

    (Keep in mind: I can access MySQL with phpmyadmin and through the backend, working like a charm; it's just the frontend call causing errors.) If I spam F5 in my browser, I can sometimes even kill the MySQL daemon. If I run

        # mysqld --log-warnings=2

    I get this:

        ...
        120921  7:57:31 [Note] mysqld: ready for connections.
        Version: '5.5.24-0ubuntu0.12.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        05:57:37 UTC - mysqld got signal 4 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.
        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        Thread pointer: 0x7f1485db3b20
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f1480041e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
        mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
        /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
        mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
        mysqld(+0x33116a)[0x7f148397916a]
        mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
        mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
        mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
        mysqld(+0x2f4524)[0x7f148393c524]
        mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
        mysqld(handle_one_connection+0x50)[0x7f14839ec830]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f1464004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED

    From /var/log/syslog:

        Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

    I'm using MyISAM tables all over, nothing with InnoDB. Starting / stopping MySQL is done via:

        sudo service mysql start
        sudo service mysql stop

    After some googling, I experimented with timeouts and the correct socket path in the /etc/mysql/my.cnf file, but nothing helped. There are some old (from 2008) Gentoo bugs where recompiling just solved the problem. I already reinstalled MySQL via:

        sudo apt-get remove mysql-server mysql-common
        sudo apt-get autoremove
        sudo apt-get install mysql-server

    without any results. This is the first time I've run into this problem, and I'm not very experienced with this kind of MySQL 'administration'. So mainly, I want to know if any of you could help me out, please :) Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious 'use-tcp-connection-instead-of-socket-stuff-because-there-are-problems-on-virtualized-machines-with-sockets' problems? Or am I completely on the wrong track and I just misconfigured something? Remember, phpmyadmin and access to the backend (which uses the database too) work just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)
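
    One detail in the crash report worth chasing: signal 4 is SIGILL (an illegal instruction), which on virtualized hosts often points at a binary built for CPU features the VM does not expose. Two quick, non-destructive checks (a sketch):

        grep -m1 flags /proc/cpuinfo                # which CPU features the VPS actually advertises
        sudo mysqld --verbose --help > /dev/null    # does the binary itself run and exit cleanly?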

    Read the article

  • Bluetooth DUN Tethering fails

    - by tacone
    I have an HTC Desire HD with Android Froyo (2.2) and PDANet installed. I am using Ubuntu 10.10. I cannot tether it over Bluetooth with either Network Manager or Blueman. (Note: I installed Blueman only after failing with Network Manager, and I even tried the latest version from the PPA.) With both, my device is discovered, paired and set up, but connecting always fails. Network Manager says it cannot get the details of my device; Blueman says Connection Refused (111). Here are some relevant entries from syslog:

        Mar 11 22:13:00 tacone-macbook bluetoothd[2242]: Bluetooth daemon 4.69
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting SDP server
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting experimental netlink support
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to find Bluetooth netlink family
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to init netlink plugin
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.284357] Bluetooth: L2CAP ver 2.14
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.284361] Bluetooth: L2CAP socket layer initialized
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.446781] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.446784] Bluetooth: BNEP filters: protocol multicast
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 registered
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.569481] Bluetooth: SCO (Voice Link) ver 0.6
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.569484] Bluetooth: SCO socket layer initialized
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 up
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting security manager 0
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: ioctl(HCIUNBLOCKADDR): Invalid argument (22)
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.818600] Bluetooth: RFCOMM TTY layer initialized
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.818607] Bluetooth: RFCOMM socket layer initialized
        Mar 11 22:13:00 tacone-macbook kernel: [ 158.818610] Bluetooth: RFCOMM ver 1.11
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Adapter /org/bluez/2242/hci0 has been enabled
        Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from ListDevices reply: org.freedesktop.DBus.Error.AccessDenied
        Mar 11 22:13:00 tacone-macbook NetworkManager[1247]: <warn> bluez error getting adapter properties: Rejected send message, 1 matched rules; type="method_call", sender=":1.4" (uid=0 pid=1247 comm="NetworkManager) interface="org.bluez.Adapter" member="GetProperties" error name="(unset)" requested_reply=0 destination="org.bluez" (uid=0 pid=2242 comm="/usr/sbin/bluetoothd))
        Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: return_link_keys (sba=00:23:6C:B5:03:6F, dba=00:23:6C:C0:F1:B0)
        Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
        Mar 11 22:15:02 tacone-macbook bluetoothd[2243]: Discovery session 0x2262d7c0 with :1.45 activated
        Mar 11 22:15:15 tacone-macbook bluetoothd[2243]: Stopping discovery
        Mar 11 22:15:15 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
        Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: io_capa_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:15:17 tacone-macbook bluetoothd[2243]: io_capa_response (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:15:18 tacone-macbook bluetoothd[2243]: Stopping discovery
        Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_notify (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE, type=5)
        Mar 11 22:15:28 tacone-macbook kernel: [ 306.585725] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:15:28 tacone-macbook kernel: [ 306.630757] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: Authentication requested
        Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:15:28 tacone-macbook kernel: [ 306.784829] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:15:28 tacone-macbook kernel: [ 306.857861] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE
        Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied
        Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: last message repeated 8 times
        Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: Stopping discovery
        Mar 11 22:15:30 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device
        Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0) opening serial device...
        Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0): probe requested by plugin 'Generic'
        Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) closing serial device...
        Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) opening serial device...
        Mar 11 22:15:49 tacone-macbook modem-manager: (rfcomm0) closing serial device...
        Mar 11 22:16:15 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device
        Mar 11 22:16:19 tacone-macbook kernel: [ 357.375108] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:16:24 tacone-macbook kernel: [ 362.169506] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:16:24 tacone-macbook kernel: [ 362.215529] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE)
        Mar 11 22:16:24 tacone-macbook kernel: [ 362.281559] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:16:24 tacone-macbook kernel: [ 362.330588] l2cap_recv_acldata: Unexpected continuation frame (len 0)
        Mar 11 22:16:24 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device

    Any help? P.S.: tethering via USB or WiFi is not an option; I need to do it over Bluetooth.

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan. ODI is renowned for its declarative designer and minimal, expression-based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow-based designer, while retaining ODI's key differentiators: minimal expression-based definition; the ability to incrementally design an interface and to extract/load data from any combination of sources; and, most importantly, backing by ODI's extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements, through consistent interactions of components and concepts all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster.

    So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below, all of the information from an Oracle bonus table and a bonus file is joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross-referencing, so you can easily see what's used where within the design.

    The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules; in ODI 12c you can pick multiple nodes (logical or physical) and see common properties. This is useful, as we can quickly compare property values across objects. Below we can see knowledge module settings on the access points between execution units side by side; in the example, one table is retrieved via database links and the other is an external table.

    In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM, which in this case is a built-in Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct-path inserts with Oracle on this statement.

    In ODI 12c, the mapper is just that: a mapper. Design your mapping and write to multiple targets; the targets can be in the same data server, in different data servers, or in totally different technologies. It does not matter: ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases that are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle, and writing of XML.

    Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings, and the implementations can be specialized per technology. So you can have common expressions across Oracle, SQL Server, Hive, etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c, expressions can be defined and reused within a mapping. Rather than replicating the expression text inside larger expressions, you can decompose it into smaller snippets; below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns: it's used in TOTAL_TAX_AMOUNT, plus it's used in UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected TOTAL_TAX_AMOUNT in the target datastore and UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time, without needlessly clicking around and memorizing information.

    Our mantra during development was to keep it simple, make the tool more powerful, and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis, and BEA AquaLogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI 11g to building the flow-based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic. This is just a preview of much more that you will see of the mapper in ODI 12c.
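    To make the expression reuse above concrete, here is a rough SQL sketch of the duplication that in-mapping reuse eliminates. The names (SRC_ORDERS, UNIT_PRICE, TAX_RATE, QUANTITY) are assumptions for illustration only and are not taken from the actual mapping:

        -- Without reuse, the UNIT_TAX_AMOUNT expression text must be repeated
        -- inside TOTAL_TAX_AMOUNT (all table and column names are hypothetical):
        SELECT o.UNIT_PRICE * o.TAX_RATE              AS UNIT_TAX_AMOUNT,
               o.UNIT_PRICE * o.TAX_RATE * o.QUANTITY AS TOTAL_TAX_AMOUNT
        FROM   SRC_ORDERS o;

    Defining UNIT_TAX_AMOUNT once as a named expression and referencing it from TOTAL_TAX_AMOUNT, as the mapper allows, removes the duplicated text and keeps the two columns consistent by construction.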

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck, along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?
    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries
    Even though the fact table is relatively small (only 22 million rows & 33GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details
    Classic vs. columnstore before-&-after metrics are impressive:

        Scenario        Conventional Structures         Columnstore     Improvement
        SSRS via SSAS   10 - 12 seconds                 1 second        >10x
        Ad Hoc          5 - 7 minutes (300 - 420 sec)   1 - 2 seconds   >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of report duration (in seconds) for conventional structures vs. columnstore indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
    The Wins
    Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, and for which ad hoc query times fell from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
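    As a footnote for readers who want to try this themselves, here is a minimal T-SQL sketch of the kind of index involved. The table and column names are hypothetical, since the post does not show DevCon's actual schema:

        -- SQL Server 2012+: a nonclustered columnstore index on a wide DW fact table.
        -- dbo.FactAlarmEvents and its columns are assumed names for illustration only.
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactAlarmEvents
            ON dbo.FactAlarmEvents
               (EventDateKey, CustomerKey, SiteKey, AlarmTypeKey, ResponseSeconds);

    One design note: in SQL Server 2012 a table carrying a nonclustered columnstore index becomes read-only, which is why a nightly refresh via partition switching, as described above, is the natural companion pattern; SQL Server 2014's updatable clustered columnstore indexes remove that restriction.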

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.

    Hopping (and its syntactic-sugar sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. Hopping windows are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows.

    Two aspects in this diagram are important. First, since this window is overlapping, an event can fall into more than one window. Second, if an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently the only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.)

    The set-based operation will be applied to each of these sets, yielding a result. This result is a single scalar value in the case of built-in or user-defined aggregates, a subset of the input payloads in the case of the TopK operator, or arbitrary events when using a user-defined operator. The timestamps of the result are almost always the ones of the windows; only the user-defined operator can create new events with its own timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.)

    Let's assume we were calculating the average over some payload field:

        var result = from window in source.HoppingWindow(
                         TimeSpan.FromHours(1),
                         TimeSpan.FromMinutes(15),
                         HoppingWindowOutputPolicy.ClipToWindowEnd)
                     select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency: no matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes, except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time).

    The "forced" output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply. Let me explain: imagine you have a lot of events that you group by and then aggregate within each group, a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can remedy this.

    Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur. In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed it in a recent tweet: "I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need."

    Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window to these extended events, it will appear to be "looking back into the past". If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute.

    Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

        var result = from window in source
                         .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                         .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                     select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns, for instance "running totals": keep the event start times, but snap their end times to some fixed time, like the end of the day. Each snapshot then "sees" all events that have happened in the respective time period so far. Regards, The StreamInsight Team

    Read the article

  • Correlated SQL Join Query from multiple tables

    - by SooDesuNe
    I have two tables like the ones below. I need to find what exchangeRate was in effect at the dateOfPurchase. I've tried some correlated subqueries, but I'm having difficulty getting the correlated record to be used in the subqueries. I expect a solution will need to follow this basic outline:

    1. SELECT only the exchangeRates for the applicable countryCode
    2. From 1., SELECT the newest exchangeRate less than the dateOfPurchase
    3. Fill in the query table with all the fields from 2. and the purchasesTable

    My tables:

    purchasesTable:

        dateOfPurchase | costOfPurchase | countryOfPurchase
        29-March-2010  | 20.00          | EUR
        29-March-2010  | 3000           | JPN
        30-March-2010  | 50.00          | EUR
        30-March-2010  | 3000           | JPN
        30-March-2010  | 2000           | JPN
        31-March-2010  | 100.00         | EUR
        31-March-2010  | 125.00         | EUR
        31-March-2010  | 2000           | JPN
        31-March-2010  | 2400           | JPN

    (costOfPurchase is in whatever the local currency is for a given countryCode.)

    exchangeRateTable:

        effectiveDate  | countryCode | exchangeRate
        29-March-2010  | JPN         | 90
        29-March-2010  | EUR         | 1.75
        30-March-2010  | JPN         | 92
        31-March-2010  | JPN         | 91

    The results of the query that I'm looking for:

        dateOfPurchase | costOfPurchase | countryOfPurchase | exchangeRate
        29-March-2010  | 20.00          | EUR               | 1.75
        29-March-2010  | 3000           | JPN               | 90
        30-March-2010  | 50.00          | EUR               | 1.75
        30-March-2010  | 3000           | JPN               | 92
        30-March-2010  | 2000           | JPN               | 92
        31-March-2010  | 100.00         | EUR               | 1.75
        31-March-2010  | 125.00         | EUR               | 1.75
        31-March-2010  | 2000           | JPN               | 91
        31-March-2010  | 2400           | JPN               | 91

    So, for example, in the results the exchange rate in effect for EUR on 31-March was 1.75. I'm using Access, but a MySQL answer would be fine too.

    UPDATE: Modification to Allan's answer (I had to make a couple of small changes):

        SELECT dateOfPurchase, costOfPurchase, countryOfPurchase, exchangeRate
        FROM purchasesTable p
        LEFT OUTER JOIN
            (SELECT e1.exchangeRate, e1.countryCode, e1.effectiveDate,
                    MIN(e2.effectiveDate) AS enddate
             FROM exchangeRateTable e1
             LEFT OUTER JOIN exchangeRateTable e2
               ON e1.effectiveDate < e2.effectiveDate
              AND e1.countryCode = e2.countryCode
             GROUP BY e1.exchangeRate, e1.countryCode, e1.effectiveDate) e
          ON p.dateOfPurchase >= e.effectiveDate
         AND (p.dateOfPurchase < e.enddate OR e.enddate IS NULL)
         AND p.countryOfPurchase = e.countryCode
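    For what it's worth, here is a sketch of the correlated-subquery approach that the outline above describes; it is not the accepted answer, it assumes at most one rate per countryCode per effectiveDate, and the syntax should work in MySQL (Access may be pickier about where the subquery appears):

        -- For each purchase, pick the newest rate on or before the purchase date.
        -- Unlike the LEFT JOIN version above, purchases with no prior rate are dropped.
        SELECT p.dateOfPurchase, p.costOfPurchase, p.countryOfPurchase, e.exchangeRate
        FROM purchasesTable AS p
        INNER JOIN exchangeRateTable AS e
                ON e.countryCode = p.countryOfPurchase
        WHERE e.effectiveDate =
              (SELECT MAX(e2.effectiveDate)
               FROM exchangeRateTable AS e2
               WHERE e2.countryCode = p.countryOfPurchase
                 AND e2.effectiveDate <= p.dateOfPurchase);

    Against the sample data this yields the expected results table: the single EUR rate from 29-March applies to every EUR purchase, and each JPN purchase picks up the rate effective that day.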

    Read the article

< Previous Page | 5 6 7 8 9 10  | Next Page >