Search Results

Search found 4759 results on 191 pages for 'depth buffer'.

  • Blank Screen at boot Ubuntu 12.04 - nvidia-current - Macbook Air 3,2

    - by soulnafein
    I've installed nvidia-current using the Additional Drivers application in Ubuntu 12.04. I need those drivers so I can use accelerated WebGL. After installing the drivers, and rebooting X fails to start and I have a frozen system/dark screen. Below is the content of Xorg.0.log How can I fix this problem? [ 4.666] X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 4.666] X Protocol Version 11, Revision 0 [ 4.666] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 4.666] Current Operating System: Linux david-macbook-air 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 4.666] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=b3d5ae2a-72af-4ef9-b775-0d40b5f80f9b ro quiet splash vt.handoff=7 [ 4.666] Build Date: 29 August 2012 12:12:33AM [ 4.666] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 4.666] Current version of pixman: 0.24.4 [ 4.666] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 4.666] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 4.666] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Dec 13 10:18:02 2012 [ 4.668] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 4.668] (==) No Layout section. Using the first Screen section. [ 4.668] (==) No screen section available. Using defaults. [ 4.668] (**) |-->Screen "Default Screen Section" (0) [ 4.668] (**) | |-->Monitor "<default monitor>" [ 4.668] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 4.668] (==) Automatically adding devices [ 4.668] (==) Automatically enabling devices [ 4.668] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 4.668] Entry deleted from font path. [ 4.668] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 4.668] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 4.669] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 4.669] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 4.669] (II) Loader magic: 0x7f6222467b00 [ 4.669] (II) Module ABI versions: [ 4.669] X.Org ANSI C Emulation: 0.4 [ 4.669] X.Org Video Driver: 11.0 [ 4.669] X.Org XInput driver : 16.0 [ 4.669] X.Org Server Extension : 6.0 [ 4.670] (--) PCI:*(0:2:0:0) 10de:08a3:106b:00d3 rev 162, Mem @ 0x92000000/16777216, 0x80000000/268435456, 0x90000000/33554432, I/O @ 0x00001000/128, BIOS @ 0x????????/131072 [ 4.670] (II) Open ACPI successful (/var/run/acpid.socket) [ 4.670] (II) LoadModule: "extmod" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 4.671] (II) Module extmod: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension MIT-SCREEN-SAVER [ 4.671] (II) Loading extension XFree86-VidModeExtension [ 4.671] (II) Loading extension XFree86-DGA [ 4.671] (II) Loading extension DPMS [ 4.671] (II) Loading extension XVideo [ 4.671] (II) Loading extension XVideo-MotionCompensation [ 4.671] (II) Loading extension X-Resource [ 4.671] (II) LoadModule: "dbe" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 4.671] (II) Module dbe: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension DOUBLE-BUFFER [ 4.671] (II) LoadModule: "glx" [ 4.671] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so [ 4.869] (II) Module glx: vendor="NVIDIA Corporation" [ 4.869] compiled for 4.0.2, module version = 1.0.0 [ 4.869] Module class: X.Org Server Extension [ 4.869] (II) NVIDIA GLX Module 295.40 Thu Apr 5 21:57:38 PDT 2012 [ 4.869] (II) Loading extension GLX [ 4.869] (II) LoadModule: "record" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 4.870] (II) Module record: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.13.0 [ 4.870] Module class: X.Org Server Extension [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension RECORD [ 4.870] (II) LoadModule: "dri" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 4.870] (II) Module dri: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.0.0 [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension XFree86-DRI [ 4.870] (II) LoadModule: "dri2" [ 4.871] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 4.871] (II) Module dri2: vendor="X.Org Foundation" [ 4.871] compiled for 1.11.3, module version = 1.2.0 [ 4.871] ABI class: X.Org Server Extension, version 6.0 [ 4.871] (II) Loading extension DRI2 [ 4.871] (==) Matched nvidia as autoconfigured driver 0 [ 4.871] (==) Matched nouveau as autoconfigured driver 1 [ 4.871] (==) Matched nv as autoconfigured driver 2 [ 4.871] (==) Matched vesa as autoconfigured driver 3 [ 4.871] (==) Matched fbdev as autoconfigured driver 4 [ 4.871] (==) Assigned the driver to the xf86ConfigLayout [ 4.871] (II) LoadModule: "nvidia" [ 4.871] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.887] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.887] compiled for 4.0.2, module version = 1.0.0 [ 4.887] Module class: X.Org Video Driver [ 4.892] (II) LoadModule: "nouveau" [ 4.894] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.894] (II) Module nouveau: vendor="X.Org Foundation" [ 4.894] compiled for 1.11.3, 
module version = 1.0.2 [ 4.894] Module class: X.Org Video Driver [ 4.894] ABI class: X.Org Video Driver, version 11.0 [ 4.894] (II) LoadModule: "nv" [ 4.895] (WW) Warning, couldn't open module nv [ 4.895] (II) UnloadModule: "nv" [ 4.895] (II) Unloading nv [ 4.895] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.895] (II) LoadModule: "vesa" [ 4.895] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.896] (II) Module vesa: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 2.3.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (II) LoadModule: "fbdev" [ 4.896] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.896] (II) Module fbdev: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 0.4.2 [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (==) Matched nvidia as autoconfigured driver 0 [ 4.896] (==) Matched nouveau as autoconfigured driver 1 [ 4.896] (==) Matched nv as autoconfigured driver 2 [ 4.896] (==) Matched vesa as autoconfigured driver 3 [ 4.896] (==) Matched fbdev as autoconfigured driver 4 [ 4.896] (==) Assigned the driver to the xf86ConfigLayout [ 4.896] (II) LoadModule: "nvidia" [ 4.896] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.896] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.896] compiled for 4.0.2, module version = 1.0.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] (II) UnloadModule: "nvidia" [ 4.896] (II) Unloading nvidia [ 4.896] (II) Failed to load module "nvidia" (already loaded, 32610) [ 4.896] (II) LoadModule: "nouveau" [ 4.897] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.897] (II) Module nouveau: vendor="X.Org Foundation" [ 4.897] compiled for 1.11.3, module version = 1.0.2 [ 4.897] Module class: X.Org Video Driver [ 4.897] ABI class: X.Org Video Driver, version 11.0 [ 4.897] (II) UnloadModule: "nouveau" [ 4.897] (II) Unloading nouveau [ 4.897] (II) Failed to load module "nouveau" (already loaded, 32610) [ 4.897] (II) LoadModule: "nv" [ 4.897] (WW) Warning, couldn't open module nv [ 4.897] (II) UnloadModule: "nv" [ 4.897] (II) Unloading nv [ 4.897] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.897] (II) LoadModule: "vesa" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.898] (II) Module vesa: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 2.3.0 [ 4.898] Module class: X.Org Video Driver [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "vesa" [ 4.898] (II) Unloading vesa [ 4.898] (II) Failed to load module "vesa" (already loaded, 0) [ 4.898] (II) LoadModule: "fbdev" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.898] (II) Module fbdev: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 0.4.2 [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "fbdev" [ 4.898] (II) Unloading fbdev [ 4.899] (II) Failed to load module "fbdev" (already loaded, 0) [ 4.899] (II) NVIDIA dlloader X Driver 295.40 Thu Apr 5 21:38:35 PDT 2012 [ 4.899] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [ 4.899] (II) NOUVEAU driver Date: Wed Sep 12 13:42:43 2012 +0200 [ 4.899] (II) NOUVEAU driver for NVIDIA chipset families : [ 4.899] RIVA TNT (NV04) [ 4.899] RIVA TNT2 (NV05) [ 4.899] GeForce 256 (NV10) [ 4.899] GeForce 2 (NV11, NV15) [ 4.899] GeForce 4MX (NV17, NV18) [ 4.899] GeForce 3 (NV20) [ 4.900] GeForce 4Ti (NV25, NV28) 
[ 4.900] GeForce FX (NV3x) [ 4.900] GeForce 6 (NV4x) [ 4.900] GeForce 7 (G7x) [ 4.900] GeForce 8 (G8x) [ 4.900] GeForce GTX 200 (NVA0) [ 4.900] GeForce GTX 400 (NVC0) [ 4.900] (II) VESA: driver for VESA chipsets: vesa [ 4.900] (II) FBDEV: driver for framebuffer: fbdev [ 4.900] (++) using VT number 7 [ 4.902] (II) Loading sub module "fb" [ 4.902] (II) LoadModule: "fb" [ 4.902] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.902] (II) Module fb: vendor="X.Org Foundation" [ 4.902] compiled for 1.11.3, module version = 1.0.0 [ 4.902] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.902] (II) Loading sub module "wfb" [ 4.902] (II) LoadModule: "wfb" [ 4.903] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.905] (II) Module wfb: vendor="X.Org Foundation" [ 4.905] compiled for 1.11.3, module version = 1.0.0 [ 4.905] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.905] (II) Loading sub module "ramdac" [ 4.905] (II) LoadModule: "ramdac" [ 4.905] (II) Module "ramdac" already built-in [ 4.907] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.912] (WW) Falling back to old probe method for vesa [ 4.912] (WW) Falling back to old probe method for fbdev [ 4.912] (II) Loading sub module "fbdevhw" [ 4.912] (II) LoadModule: "fbdevhw" [ 4.912] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 4.912] (II) Module fbdevhw: vendor="X.Org Foundation" [ 4.912] compiled for 1.11.3, module version = 0.0.2 [ 4.912] ABI class: X.Org Video Driver, version 11.0 [ 4.912] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 4.912] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32 [ 4.912] (==) NVIDIA(0): RGB weight 888 [ 4.912] (==) NVIDIA(0): Default visual is TrueColor [ 4.912] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 4.912] (**) NVIDIA(0): Enabling 2D acceleration [ 5.442] (EE) NVIDIA(0): Failed to initialize the display subsystem for the NVIDIA [ 5.442] (EE) NVIDIA(0): graphics device! [ 5.442] (EE) NVIDIA(0): Failed to get supported display device(s) [ 5.442] (EE) NVIDIA(0): Failed to initialize dac HAL [ 5.442] (II) UnloadModule: "nvidia" [ 5.442] (II) Unloading nvidia [ 5.442] (II) UnloadModule: "wfb" [ 5.442] (II) Unloading wfb [ 5.442] (II) UnloadModule: "fb" [ 5.443] (II) Unloading fb [ 5.443] (EE) Screen(s) found, but none have a usable configuration. [ 5.443] Fatal server error: [ 5.443] no screens found [ 5.443] Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 5.443] Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 5.443] [ 5.447] ddxSigGiveUp: Closing log [ 5.447] Server terminated with error (1). Closing log file.

    Read the article

  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:
    Error    1    Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
        at System.Collections.Generic.List`1.set_Capacity(Int32 value)
        at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
        at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
        at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
        //additional calls here …
    My desktop PC has 8 GB of RAM, and Visual Studio's process devenv.exe uses under 2 GB of it during the build (about 3.5-4 GB of RAM is always free). Clearly VS can't address more than 2 GB of RAM, and when that limit is hit the build fails. My OS is 64-bit Windows, so I "charge" devenv.exe by using the editbin.exe utility – in the VS Command Prompt I run the following:
    editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE
    This command edits the executable image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file built successfully! Of course, you must use the proper path to devenv.exe, depending on your installation path. If you are on 32-bit Windows, you need to take an additional step – more info here.   P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your XNA project profile (Reach or HiDef). If your model's vertex buffer is larger than 64 MB (with the Reach profile), the model can't be built and raises an error.

    Read the article

  • E-Business Suite - Cloning Basics & AMP Cloning - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - US
    PRODUCT FAMILY: EBS – ATG - Utilities
    July 20, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern
    This 1.5-hour session is recommended for technical and functional users who are interested in getting a generic overview of the Cloning functionality available in the E-Business Suite release. We are going to talk about the generic Cloning options and will then go into depth on the cloning scenario when using AMP (Applications Management Pack) within Enterprise Manager.
    TOPICS WILL INCLUDE:
    Cloning Overview
    Rapidclone steps in detail
    Rapidclone limitations
    EM Grid Setup with AMP for Cloning
    Advantages of Cloning with AMP
    Cloning Procedures available with AMP
    Monitoring Clone Operation
    A few things to remember before Cloning
    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.
    -------------------------------------------------------------------------------------------------------------
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day

    - by mseika
    Dear Partner, Avnet and Oracle would like to invite you to a FREE Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day which is taking place on Wednesday 16th January 2013 at the Oracle London City offices in Moorgate. We will give you the opportunity to gain an in-depth understanding of how hardware and software are engineered to work together to create the power and scalability of Exadata. The session will focus on Oracle Exadata fundamentals, features, components and capabilities. The event will last a full day and will give you the opportunity to put questions to the presenters, helping you understand how to spot an Exadata opportunity and how to position Exadata within it.
    Register Now or call our Hotline on 01925 856999.
    When: Wednesday 16th January 2013. Duration: 9.30am to 5.00pm.
    Where: Oracle Corporation UK Ltd, One South Place, London EC2M 2RB. For directions please see the location map.
    Cost: No charge.
    Contact Us: Avnet Technology Solutions Limited, Clarity House, 103 Dalton Avenue, Birchwood Park, Warrington, WA3 6YB, UK. T: 01925 856900 F: 01925 856901 E: [email protected]
    Or find us online: Avnet website | LinkedIn | Twitter | Facebook

    Read the article

  • nvidia segfault crashes XServer (12.04 fresh install)

    - by Sébastien GILLES
    I am on a fresh install of 12.04, and my video card is a Quadro FX 570, which appears to be supported (see the output of unity_support_test below). But my X server crashes randomly and causes my session to shut down and bring me back to the login screen. This happens several times per hour. After checking syslog, it appears that a segfault in the nvidia driver is the cause of the problem:
    Jun 13 11:14:17 lima kernel: [86569.828982] Xorg[17940]: segfault at ffebeb69 ip b4c8f7aa sp bf8076dc error 4 in nvidia_drv.so[b47a7000+706000]
    Jun 13 11:14:36 lima gnome-session[18119]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :3.#012
    The funny thing is that as I was reporting this bug, my X server crashed again (!) and this time I got another error message:
    Jun 13 11:29:39 lima kernel: [87491.106441] NVRM: Xid (0000:02:00): 6, PE0001
    Jun 13 11:29:39 lima gnome-session[26493]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :4.#012
    Any idea what the problem could be, given that I am on a fresh install with a supported video card?
    Sébastien
    == Output of unity_support_test
    $ /usr/lib/nux/unity_support_test -p
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: Quadro FX 570/PCIe/SSE2
    OpenGL version string: 3.3.0 NVIDIA 295.40
    Not software rendered: yes
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: yes

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from entering incorrect data, we don't want users to provide free-text information but instead to choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API from an external service provider, or create your own local database.
    APIs
    Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data.
    - no maintenance related to updating the data, as this is taken care of by the API provider.
    - easier/faster to get started (no need to create a local database, just implement the API).
    Cons:
    - degraded performance when the external API has availability issues.
    - outages due to changes to the external API (until your code is updated to reflect those changes).
    - lock-in with the external provider.
    Local database
    Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance.
    Cons:
    - more work to get started (you need to create the database and the code to interact with it).
    - risk of inaccurate/incomplete data, either initially or over time.
    - more maintenance work to keep the database up to date.
    Assuming the depth of information requested from users is as follows:
    - country: interested in the value. Also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.
    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
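    As a purely illustrative sketch (not from the question; the rows, type names and place data below are made up), the narrowing flow described above boils down to filtering one level by the previous choice, whichever data source ends up behind it:
    // Hypothetical in-memory lookup; a real site would back this with the
    // GeoNames/MaxMind data mentioned above, or with calls to an external API.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record Place(string Country, string Region, string City);

    public static class LocationLookup
    {
        // Sample rows only.
        private static readonly List<Place> Places = new List<Place>
        {
            new Place("US", "California", "San Francisco"),
            new Place("US", "California", "Los Angeles"),
            new Place("UK", "Greater London", "London"),
        };

        public static IEnumerable<string> RegionsIn(string country) =>
            Places.Where(p => p.Country == country).Select(p => p.Region).Distinct();

        public static IEnumerable<string> CitiesIn(string country, string region) =>
            Places.Where(p => p.Country == country && p.Region == region)
                  .Select(p => p.City).Distinct();

        public static void Main()
        {
            foreach (var city in CitiesIn("US", "California"))
                Console.WriteLine(city);   // San Francisco, Los Angeles
        }
    }
    The country the user picks drives RegionsIn, the region drives CitiesIn, and the address stays free text since it is optional.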

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimal validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing is developing plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with the operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big-picture impact of the changes. IBP also addresses one of the CFO's big concerns—the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen - not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain Executive Magazine.

    Read the article

  • Failing to upgrade to linux-image-3.0.0-26-generic

    - by Dan Lee
    When I try to upgrade linux-image-3.0.0-26-generic I get following problems: dpkg-deb (subprocess): data: internal bzip2 read error: 'DATA_ERROR' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./lib/modules/3.0.0-26-generic/kernel/drivers/scsi/fnic/fnic.ko' No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.0.0-26-generic; however: Package linux-image-3.0.0-26-generic is not installed. I don't know why this happens to me; earlier upgrades always worked without problems. Does anybody know how to fix this?

    Read the article

  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge - whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in-depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through Professional Development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them. I realise that PD policy is going to vary across companies of different size and culture, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers; especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying. Not quite a summary, but what I'm trying to ask: Is it common or recommended for companies to allocate PD time? Whose responsibility is it to ensure a developer's knowledge and skills are up to date? Should a part-time work schedule inspire a lower ratio of PD time : work? How can a developer show non-developer coworkers that reading blogs and books is net productive? Is reading blogs and books actually net productive? (references welcomed) Is writing blogs effective as a way of PD? (a recent theme on Hacker News) This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the Back Buffer and the Client Bounds are and the difference between those two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author mostly uses Client Bounds, both for checking whether a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions to this pattern in his book. The first time is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108 as mentioned). The second and last time is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (code example from where he positions a sprite in the middle of the screen [page 35]):
    spriteBatch.Draw(texture,
        new Vector2(
            (Window.ClientBounds.Width / 2) - (texture.Width / 2),
            (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
        null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);
    And here is the first case of four possible in a switch statement that will set the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):
    // Randomly choose which side of the screen to place enemy,
    // then randomly create a position along that side of the screen
    // and randomly choose a speed for the enemy
    switch (((Game1)Game).rnd.Next(4))
    {
        case 0: // LEFT to RIGHT
            position = new Vector2(
                -frameSize.X,
                ((Game1)Game).rnd.Next(0,
                    Game.GraphicsDevice.PresentationParameters.BackBufferHeight - frameSize.Y));
            speed = new Vector2(((Game1)Game).rnd.Next(enemyMinSpeed, enemyMaxSpeed), 0);
            break;
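    For what it's worth (this is not from the book), one way to see how the two relate on your own machine is to log both sizes from inside your Game subclass. In a plain windowed desktop game with default settings they normally report the same dimensions, which is why the book can switch between them, but they can diverge (window resized, full-screen mode, phone orientation), so it pays to pick one and use it consistently:
    // Purely diagnostic; place inside your Game subclass, e.g. in Update().
    Rectangle client = Window.ClientBounds;                              // client area of the OS window
    PresentationParameters pp = GraphicsDevice.PresentationParameters;  // the render target actually drawn to
    Console.WriteLine("ClientBounds: " + client.Width + "x" + client.Height +
                      ", BackBuffer: " + pp.BackBufferWidth + "x" + pp.BackBufferHeight);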

    Read the article

  • Interpreting Others' Source Code

    - by Maxpm
    Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author. As a student in an introductory-level computer science class, my friends occasionally ask me to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot staring at the screen for several minutes saying "Uh..." I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extractor operators, etc. The trouble comes when that fails. I often can't step through with a debugger because it's a syntax error, and I often can't ask the author because he/she him/herself doesn't understand the design decisions. How do you typically read the source code of others? Do you read through the code from top-down, or do you follow each function as it's called? How do you know when to say "It's time to refactor?"

    Read the article

  • double screen in ubuntu 12.04?

    - by johan
    I am using ubuntu 12.04 and my video card is ATI Radeon 5000. I cannot use double screen (extended version). I get this error The selected configuration for displays could not be applied requested position/size for CRTC 148 is outside the allowed limit: position=(1280, 0), size=(1280, 768), maximum=(1440, 1440) I tried all display settings but it does not work. Some outputs from the system settings: root@ubuntu:~# lshw -C display *-display description: VGA compatible controller product: Madison [Radeon HD 5000M Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff root@ubuntu:~# aticonfig --initial Uninitialised file found, configuring. Using /etc/X11/xorg.conf Saving back-up to /etc/X11/xorg.conf.original-0 root@ubuntu:~# cat /etc/X11/xorg.conf Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Module" Load "glx" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "Default Screen" DefaultDepth 24 EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I would appreciate any suggestions how to solve the problem. Thank you

    Read the article

  • Youtube video streaming slow

    - by Newbie
    When I try to stream youtube videos on my ubuntu 11.04, they don't stream smoothly. They buffer well and are choppy. Here's my Config: Laptop: Gateway NV58 Ram : 4 GB Ethernet Controller: Broadcom Corp NetLink BCM5784M Gigabit Ethernet PCIe (rev10) Let me know if you need more details. Output of lspci: 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03) 00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03) 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.3 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 02:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10) 04:00.0 Network controller: Intel Corporation WiFi Link 5100

    Read the article

  • Fuzzing for Security

    - by Sylvain Duloutre
    Yesterday, I attended an internal workshop about ethical hacking. Hacking skills like fuzzing can be used to quantitatively assess and measure security threats in software.  Fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by injecting massive amounts of random data, called fuzz, into the system in an attempt to make it crash. If the program contains a vulnerability that can lead to an exception, crash or server error (in the case of web apps), it can be determined that a vulnerability has been discovered. A fuzzer is a program that generates and injects random (and generally faulty) input into an application. Its main purpose is to make things easier and automated. There are typically two methods for producing the fuzz data that is sent to a target: generation or mutation. Generational fuzzers are capable of building the data being sent based on a data model provided by the fuzzer creator. Sometimes this is as simple and dumb as sending random bytes or swapping bytes; sometimes it is much smarter, knowing good values and combining them in interesting ways. Mutation, on the other hand, starts out with a known-good "template" which is then modified. However, nothing that is not present in the "template" or "seed" will be produced. Generally, fuzzers are good at finding buffer overflow, DoS, SQL injection and format string bugs. They do a poor job at finding vulnerabilities related to information disclosure, encryption flaws and any other vulnerability that does not cause the program to crash.  Fuzzing is simple and offers a high benefit-to-cost ratio but does not replace other proven testing techniques. What is your computer doing over the weekend?
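    To make the mutation idea concrete, here is a rough, self-contained C# sketch (mine, not from the workshop); the seed file name and the ParseRecord target are hypothetical stand-ins for whatever input format and parser you are assessing:
    // Minimal mutation fuzzer sketch: flip random bytes in a known-good seed
    // and feed the result to a placeholder parser, logging anything that throws.
    using System;
    using System.IO;

    class MiniFuzzer
    {
        static readonly Random Rng = new Random();

        static void Main()
        {
            byte[] seed = File.ReadAllBytes("seed.bin");   // known-good "template"/seed input (hypothetical path)
            for (int i = 0; i < 10000; i++)
            {
                byte[] fuzz = Mutate(seed);
                try
                {
                    ParseRecord(fuzz);                     // the code under test (placeholder)
                }
                catch (Exception ex)
                {
                    File.WriteAllBytes("crash_" + i + ".bin", fuzz);   // keep the offending input for triage
                    Console.WriteLine("Iteration " + i + ": " + ex.GetType().Name + " - " + ex.Message);
                }
            }
        }

        // Copy the seed and corrupt a handful of random bytes (mutation-style fuzzing).
        static byte[] Mutate(byte[] seed)
        {
            byte[] copy = (byte[])seed.Clone();
            int flips = Rng.Next(1, 8);
            for (int i = 0; i < flips; i++)
                copy[Rng.Next(copy.Length)] = (byte)Rng.Next(256);
            return copy;
        }

        // Hypothetical target with a deliberate weakness: it trusts a length field in the header.
        static void ParseRecord(byte[] data)
        {
            if (data.Length >= 8 && BitConverter.ToInt32(data, 0) > data.Length)
                throw new InvalidDataException("declared length exceeds the buffer");
        }
    }
    A generation-based fuzzer would replace Mutate with code that builds inputs from a model of the format instead of corrupting a seed.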

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping. The lines are much crisper nearby, but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):
    ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription
    {
        AddressU = TextureAddressMode.Clamp,
        AddressV = TextureAddressMode.Clamp,
        AddressW = TextureAddressMode.Clamp,
        Filter = Filter.Anisotropic,
        MaximumAnisotropy = anisotropicLevel,
        MinimumLod = 0,
        MaximumLod = float.MaxValue
    });

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:
    Duplication & enlargement of the model with flipped normals (not an option for me)
    Sobel filter / fragment shader approaches to edge detection
    Stencil buffer approaches to edge detection
    Geometry (or vertex) shader approaches that calculate face and edge normals
    Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well as, e.g., for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do? What I want - varying line thickness with intrusive lines inside the silhouette... What I don't want...

    Read the article

  • XNA calculate normals for linesegment

    - by Gerhman
    I am quite new to 3D graphics programming and thus far only understand that normals somehow define the direction in which a vertex faces and therefore the direction in which light is reflected. I have no idea how they are calculated, though, only that they are defined by a Vector3. For a visualizer that I am creating I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line and then rendering a line list. The thing is, I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. To calculate them, all I have are the start and end positions of each line segment. The image below represents what I think I need to do in the case of an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices, and the green numbers represent their indices. I would greatly appreciate it if someone could explain to me how I should calculate these normals.
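    This is not from the question, but a minimal sketch of one common approach (assuming XNA's Vector3 and that Y is the up axis): cross each segment's direction with the up vector, normalize, and give that normal to both of the segment's vertices.
    // Requires a reference to the XNA framework (for Vector3).
    using Microsoft.Xna.Framework;

    static class LineNormals
    {
        // One normal per segment: perpendicular to the segment and lying in the horizontal plane.
        public static Vector3[] ComputeSegmentNormals(Vector3[] starts, Vector3[] ends)
        {
            var normals = new Vector3[starts.Length];
            for (int i = 0; i < starts.Length; i++)
            {
                Vector3 direction = ends[i] - starts[i];
                Vector3 normal = Vector3.Cross(direction, Vector3.Up); // points "sideways" from the segment
                if (normal.LengthSquared() > 0f)
                    normal.Normalize();        // stays zero only for vertical or zero-length segments
                normals[i] = normal;           // use the same normal for the segment's start and end vertex
            }
            return normals;
        }
    }
    To use these for lighting you would switch the vertex buffer from plain positions to a vertex type that carries a normal (for example VertexPositionNormalTexture) and write each segment's normal into both of its vertices.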

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in starts the session and then drops back to LightDM. I've already dropped to a terminal (ctrl+alt+f2) and done this:
    sudo apt-get update
    sudo apt-get install --reinstall ubuntu-desktop
    sudo apt-get install unity
    Logging in as a guest session also fails. Logging in to other window managers works with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA. I'm using a 2-monitor setup. Also of note: in the session prior to my upgrade to 13.10, the Unity background failed to display at all, instead showing whatever was in the screen buffer from the previous frame. The entire OS worked correctly otherwise, though, so I just ignored it for the session. No other upgrades or even updates were done prior to this occurring. My upgrade path to 13.10 was basically this: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(

    Read the article

  • Slow boot on Ubuntu 12.04, probable cause the network connection

    - by Ravi S Ghosh
    I have been having rather slow boot on Ubuntu 12.04. Lately, I tried to figure out the reason and it seems to be the network connection which does not get connected and requires multiple attempts. Here is part of dmesg [ 2.174349] EXT4-fs (sda2): INFO: recovery required on readonly filesystem [ 2.174352] EXT4-fs (sda2): write access will be enabled during recovery [ 2.308172] firewire_core: created device fw0: GUID 384fc00005198d58, S400 [ 2.333457] usb 7-1.2: new low-speed USB device number 3 using uhci_hcd [ 2.465896] EXT4-fs (sda2): recovery complete [ 2.466406] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 2.589440] usb 7-1.3: new low-speed USB device number 4 using uhci_hcd **[ 18.292029] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 18.458958] udevd[377]: starting version 175 [ 18.639482] Adding 4200960k swap on /dev/sda5. Priority:-1 extents:1 across:4200960k [ 19.314127] wmi: Mapper loaded [ 19.426602] r592 0000:09:01.2: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.426739] r592: driver successfully loaded [ 19.460105] input: Dell WMI hotkeys as /devices/virtual/input/input5 [ 19.493629] lp: driver loaded but no devices found [ 19.497012] cfg80211: Calling CRDA to update world regulatory domain [ 19.535523] ACPI Warning: _BQC returned an invalid level (20110623/video-480) [ 19.539457] acpi device:03: registered as cooling_device2 [ 19.539520] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/device:01/LNXVIDEO:00/input/input6 [ 19.539568] ACPI: Video Device [M86] (multi-head: yes rom: no post: no) [ 19.578060] Linux video capture interface: v2.00 [ 19.667708] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 19.763171] r852 0000:09:01.3: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.763258] r852: driver loaded successfully [ 19.854769] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.0/input/input7 [ 19.854864] generic-usb 0003:045E:00DD.0001: input,hidraw0: USB HID v1.11 Keyboard [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input0 [ 19.878605] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.1/input/input8 [ 19.878698] generic-usb 0003:045E:00DD.0002: input,hidraw1: USB HID v1.11 Device [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input1 [ 19.902779] input: DELL DELL USB Laser Mouse as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.3/7-1.3:1.0/input/input9 [ 19.925034] generic-usb 0003:046D:C063.0003: input,hidraw2: USB HID v1.10 Mouse [DELL DELL USB Laser Mouse] on usb-0000:00:1d.1-1.3/input0 [ 19.925057] usbcore: registered new interface driver usbhid [ 19.925059] usbhid: USB HID core driver [ 19.942362] uvcvideo: Found UVC 1.00 device Laptop_Integrated_Webcam_2M (0c45:63ea) [ 19.947004] input: Laptop_Integrated_Webcam_2M as /devices/pci0000:00/0000:00:1a.7/usb1/1-6/1-6:1.0/input/input10 [ 19.947075] usbcore: registered new interface driver uvcvideo [ 19.947077] USB Video Class driver (1.1.1) [ 20.145232] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 20.145235] Copyright(c) 2003-2011 Intel Corporation [ 20.145327] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 20.145357] iwlwifi 0000:04:00.0: setting latency timer to 64 [ 20.145402] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000 [ 20.145404] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90000674000 [ 20.145407] iwlwifi 0000:04:00.0: HW Revision ID = 0x0 [ 20.145531] 
iwlwifi 0000:04:00.0: irq 46 for MSI/MSI-X [ 20.145613] iwlwifi 0000:04:00.0: Detected Intel(R) WiFi Link 5100 AGN, REV=0x54 [ 20.145720] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S [ 20.167535] iwlwifi 0000:04:00.0: device EEPROM VER=0x11f, CALIB=0x4 [ 20.167538] iwlwifi 0000:04:00.0: Device SKU: 0Xf0 [ 20.167567] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels [ 20.172779] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [ 20.172783] Disabling lock debugging due to kernel taint [ 20.250115] [fglrx] Maximum main memory to use for locked dma buffers: 3759 MBytes. [ 20.250567] [fglrx] vendor: 1002 device: 9553 count: 1 [ 20.251256] [fglrx] ioport: bar 1, base 0x2000, size: 0x100 [ 20.251271] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 20.251277] pci 0000:01:00.0: setting latency timer to 64 [ 20.251559] [fglrx] Kernel PAT support is enabled [ 20.251578] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [ 20.310385] iwlwifi 0000:04:00.0: loaded firmware version 8.83.5.1 build 33692 [ 20.310598] Registered led device: phy0-led [ 20.310628] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.372306] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [ 20.411015] psmouse serio1: synaptics: Touchpad model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa40000/0xa0000 [ 20.454232] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input11 [ 20.545636] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.545640] cfg80211: World regulatory domain updated: [ 20.545642] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 20.545644] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545647] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545649] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545652] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545654] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.609484] type=1400 audit(1340502633.160:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=693 comm="apparmor_parser" [ 20.609494] type=1400 audit(1340502633.160:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=642 comm="apparmor_parser" [ 20.609843] type=1400 audit(1340502633.160:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=693 comm="apparmor_parser" [ 20.609852] type=1400 audit(1340502633.160:5): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=642 comm="apparmor_parser" [ 20.610047] type=1400 audit(1340502633.160:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=693 comm="apparmor_parser" [ 20.610060] type=1400 audit(1340502633.160:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=642 comm="apparmor_parser" [ 20.610476] type=1400 audit(1340502633.160:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=814 comm="apparmor_parser" [ 20.610829] type=1400 audit(1340502633.160:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=814 comm="apparmor_parser" [ 
20.611035] type=1400 audit(1340502633.160:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=814 comm="apparmor_parser" [ 20.661912] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 20.661982] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [ 20.662013] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 20.770289] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 20.770689] snd_hda_intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 20.770786] snd_hda_intel 0000:01:00.1: irq 48 for MSI/MSI-X [ 20.770815] snd_hda_intel 0000:01:00.1: setting latency timer to 64 [ 20.994040] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0 [ 20.994189] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input13 [ 21.554799] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [ 21.554802] vesafb: scrolling: redraw [ 21.554804] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [ 21.557342] vesafb: framebuffer at 0xd0000000, mapped to 0xffffc90011800000, using 3072k, total 3072k [ 21.557498] Console: switching to colour frame buffer device 128x48 [ 21.557516] fb0: VESA VGA frame buffer device [ 21.987338] EXT4-fs (sda2): re-mounted. Opts: errors=remount-ro [ 22.184693] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [ 27.362440] iwlwifi 0000:04:00.0: RF_KILL bit toggled to disable radio. [ 27.436988] init: failsafe main process (986) killed by TERM signal [ 27.970112] ppdev: user-space parallel port driver [ 28.198917] Bluetooth: Core ver 2.16 [ 28.198935] NET: Registered protocol family 31 [ 28.198937] Bluetooth: HCI device and connection manager initialized [ 28.198940] Bluetooth: HCI socket layer initialized [ 28.198941] Bluetooth: L2CAP socket layer initialized [ 28.198947] Bluetooth: SCO socket layer initialized [ 28.226135] Bluetooth: RFCOMM TTY layer initialized [ 28.226141] Bluetooth: RFCOMM socket layer initialized [ 28.226143] Bluetooth: RFCOMM ver 1.11 [ 28.445620] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 28.445623] Bluetooth: BNEP filters: protocol multicast [ 28.524578] type=1400 audit(1340502641.076:11): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=1052 comm="apparmor_parser" [ 28.525018] type=1400 audit(1340502641.076:12): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=1052 comm="apparmor_parser" [ 28.629957] type=1400 audit(1340502641.180:13): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1105 comm="apparmor_parser" [ 28.630325] type=1400 audit(1340502641.180:14): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1105 comm="apparmor_parser" [ 28.630535] type=1400 audit(1340502641.180:15): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1105 comm="apparmor_parser" [ 28.645266] type=1400 audit(1340502641.196:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1104 comm="apparmor_parser" **[ 28.751922] ADDRCONF(NETDEV_UP): wlan0: link is not ready** [ 28.753653] tg3 0000:08:00.0: irq 49 for MSI/MSI-X **[ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 28.857034] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 28.871080] type=1400 audit(1340502641.420:17): apparmor="STATUS" operation="profile_load" 
name="/usr/lib/telepathy/mission-control-5" pid=1108 comm="apparmor_parser" [ 28.871519] type=1400 audit(1340502641.420:18): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1108 comm="apparmor_parser" [ 28.874905] type=1400 audit(1340502641.424:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1113 comm="apparmor_parser" [ 28.875354] type=1400 audit(1340502641.424:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1113 comm="apparmor_parser" [ 30.477976] tg3 0000:08:00.0: eth0: Link is up at 100 Mbps, full duplex [ 30.477979] tg3 0000:08:00.0: eth0: Flow control is on for TX and on for RX **[ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready** [ 31.110269] fglrx_pci 0000:01:00.0: irq 50 for MSI/MSI-X [ 31.110859] [fglrx] Firegl kernel thread PID: 1327 [ 31.111021] [fglrx] Firegl kernel thread PID: 1329 [ 31.111408] [fglrx] Firegl kernel thread PID: 1330 [ 31.111543] [fglrx] IRQ 50 Enabled [ 31.712938] [fglrx] Gart USWC size:1224 M. [ 31.712941] [fglrx] Gart cacheable size:486 M. [ 31.712945] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [ 31.712948] [fglrx] Reserved FB block: Unshared offset:fc2b000, size:3d5000 [ 31.712950] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [ 41.312020] eth0: no IPv6 routers present As you can see I get multiple instances of [ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready and then finally it becomes read and I get the message [ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready. I searched askubuntun, ubuntuforum, and the web but couldn't find a solution. Any help would be very much appreciated. Here is the bootchart

    Read the article

  • Is QtQuick.Controls available on Ubuntu 13.10

    - by javascript is future
    I was looking to do UI development in QML, and I really want it to look native. I found the QtQuick.Controls module (http://qt-project.org/doc/qt-5.1/qtquickcontrols/qtquickcontrols-index.html), but when I try to make a simple application, it tells me that QtQuick.Controls isn't installed. main.qml: import QtQuick 2.1 import QtQuick.Controls 1.0 Rectangle { height: 200 width: 200 } terminal: $ qmlscene main.qml file:///tmp/main.qml:2 module "QtQuick.Controls" is not installed Also, I downloaded the source from https://qt.gitorious.org/qt/qtquickcontrols/source/stable, ran qmake && make, but this returned the following output: cd src/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/src.pro -o Makefile ) && make -f Makefile make[1]: Entering directory '/tmp/qtquickcontrols/src' cd controls/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/controls/controls.pro -o Makefile ) && make -f Makefile make[2]: Entering directory '/tmp/qtquickcontrols/src/controls' g++ -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -O2 -fvisibility=hidden -fvisibility-inlines-hidden -std=c++0x -fno-exceptions -Wall -W -D_REENTRANT -fPIC -DQT_NO_XKB -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_PLUGIN -DQT_QUICK_LIB -DQT_QML_LIB -DQT_WIDGETS_LIB -DQT_NETWORK_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/include/qt5 -I/usr/include/qt5/QtQuick -I/usr/include/qt5/QtQml -I/usr/include/qt5/QtWidgets -I/usr/include/qt5/QtNetwork -I/usr/include/qt5/QtGui -I/usr/include/qt5/QtGui/5.1.1 -I/usr/include/qt5/QtGui/5.1.1/QtGui -I/usr/include/qt5/QtCore -I/usr/include/qt5/QtCore/5.1.1 -I/usr/include/qt5/QtCore/5.1.1/QtCore -I.moc/release-shared -o .obj/release-shared/qquickaction.o qquickaction.cpp qquickaction.cpp:49:39: fatal error: private/qguiapplication_p.h: No such file or directory #include <private/qguiapplication_p.h> ^ Is there some PPA I could use, or do I have to wait for Trusty to get out, before I can use native controls from Qt? Regards

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly, until there are no more conflicts. # The appointment list is always sorted on start time appointment_list = [ <Appointment: 10:00 -> 12:00>, <Appointment: 11:00 -> 12:30>, <Appointment: 13:00 -> 14:00>, <Appointment: 13:30 -> 14:30>, ] Constraints are that appointments: cannot be after 15:00 cannot be before 9:00 This is the naive algorithm: for i, app in enumerate(appointment_list): for possible_conflict in appointment_list[i+1:]: if possible_conflict.start < app.end: difference = app.end - possible_conflict.start possible_conflict.end += difference possible_conflict.start += difference else: break This results in the following resolution, which obviously breaks those constraints, and the last appointment will have to be pushed to the following day. appointment_list = [ <Appointment: 10:00 -> 12:00>, <Appointment: 12:00 -> 13:30>, <Appointment: 13:30 -> 14:30>, <Appointment: 14:30 -> 15:30>, ] Obviously this is sub-optimal: it performs 3 appointment moves when the conflict could have been resolved with fewer, because if we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that should be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or a depth-first solution search? When do I know if the solution is "good enough"?
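
    One way to make the "least number of appointments moved" idea concrete is to search over which appointments are kept at their original times and to check whether the remaining ones can be re-packed around them. The sketch below is not from the original post: the Appointment class, the minute-based times, and the assumption that the relative order of appointments is preserved are illustrative choices, and the brute-force subset search is only practical for small schedules like the one above.

        # Brute-force sketch: keep as many appointments as possible at their
        # original times, and test whether the rest can be shifted (same
        # duration, same relative order) into the 9:00-15:00 window.
        from itertools import combinations

        DAY_START = 9 * 60    # appointments cannot start before 09:00
        DAY_END = 15 * 60     # appointments cannot end after 15:00

        class Appointment:
            def __init__(self, start, end):
                self.start, self.end = start, end

            def __repr__(self):
                return (f"<Appointment: {self.start // 60:02d}:{self.start % 60:02d}"
                        f" -> {self.end // 60:02d}:{self.end % 60:02d}>")

        def can_keep_fixed(appointments, fixed):
            """True if the appointments not in `fixed` can be shifted so that
            nothing overlaps and everything fits inside the day, while the
            `fixed` ones stay exactly where they are."""
            cursor = DAY_START  # earliest time the next appointment may start
            for i, app in enumerate(appointments):
                duration = app.end - app.start
                if i in fixed:
                    if app.start < cursor:   # a fixed appointment would be overlapped
                        return False
                    cursor = app.end
                else:
                    cursor += duration       # place the movable appointment as early as possible
                    if cursor > DAY_END:
                        return False
            return True

        def fewest_moves(appointments):
            """Indices of a smallest set of appointments that must be moved.
            Exponential in the number of appointments, so only suitable for
            small schedules."""
            n = len(appointments)
            for kept in range(n, -1, -1):
                for fixed in combinations(range(n), kept):
                    if can_keep_fixed(appointments, set(fixed)):
                        return [i for i in range(n) if i not in fixed]
            return list(range(n))

        if __name__ == "__main__":
            appointment_list = [
                Appointment(10 * 60, 12 * 60),            # 10:00 -> 12:00
                Appointment(11 * 60, 12 * 60 + 30),       # 11:00 -> 12:30
                Appointment(13 * 60, 14 * 60),            # 13:00 -> 14:00
                Appointment(13 * 60 + 30, 14 * 60 + 30),  # 13:30 -> 14:30
            ]
            print(fewest_moves(appointment_list))  # -> [0, 3]: move the first and last appointments

    For larger schedules, the same feasibility check could drive a branch-and-bound or dynamic-programming search instead of trying every subset of appointments to keep fixed.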

    Read the article

  • What am I doing wrong in my config for MySQL?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; I'm not sure how to check, as I only installed it today). I will give you any info you ask for. Thanks for helping! # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/run/mysqld/mysqld.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/run/mysqld/mysqld.sock skip-locking key_buffer = 16K max_allowed_packet = 1M table_cache = 4 sort_buffer_size = 64K read_buffer_size = 256K read_rnd_buffer_size = 256K net_buffer_length = 2K thread_stack = 64K # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (using the "enable-named-pipe" option) will render mysqld useless! # #skip-networking server-id = 1 # Uncomment the following if you want to log updates #log-bin=mysql-bin # Uncomment the following if you are NOT using BDB tables skip-bdb # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = /var/lib/mysql/ #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /var/lib/mysql/ #innodb_log_arch_dir = /var/lib/mysql/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 16M #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 5M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 skip-innodb [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 1M sort_buffer_size = 1M [myisamchk] key_buffer = 1M sort_buffer_size = 1M [mysqlhotcopy] interactive-timeout So what is my silly error?

    Read the article

  • Do you know about the Visual Studio 2010 Architecture Guidance?

    - by Martin Hinshelwood
    If you have not seen the Visual Studio 2010 Architectural Guidance from the Visual Studio ALM Rangers, then you are missing out. I have been spelunking the TFS Guidance recently and I discovered the Visual Studio 2010 Architectural Guidance. This is not an in-depth look at the capabilities of the architectural tools that shipped with Visual Studio 2010 Ultimate, but is instead a set of samples that lead you by example through real-world scenarios. There is practical guidance and there are checklists to help guide lead developers and architects through the common challenges in understanding both existing and new applications. The content concentrates on practical guidance for Visual Studio 2010 Ultimate and is focused on the modelling tools. There is integration into Visual Studio, so all you need to do to access it is select “Architecture | Visual Studio ALM Rangers – Architecture Guidance”. Figure: Accessing the Architecture Guidance is easy This brings up an inline version of the documentation and a kind of Explorer that lets you pick the tasks you want to perform and takes you straight to that part of the Guidance. Figure: Access the Guidance from right within Visual Studio 2010 This is a big help when you just want to figure out how to do something and can’t be bothered searching for and through the content in the provided Word documents. The Question and Answer section is full of useful content and there are six Hands-On Labs to sink your teeth into: Creating extensions with the feature extension Explore an Existing System Scenario Extensibility Layer Diagrams New Solution Scenario Reusable Architecture Scenario Validating an Architecture Scenario I’m sold! Where can I get my hands on this fantastic content? Download the Visual Studio 2010 Architecture Tooling Guidance and if you like it don’t forget to add a review to make the team that put it together in their spare time feel all the more loved.

    Read the article

  • Nvidia dual monitor configuration gets lost every time I reboot

    - by sunwukung
    I've recently updated (well, borked then completely reinstalled) to 12.04. I'm running a dual monitor setup, with a Dell U2410 / Dell 2007WFP combination on an HP EliteBook 8560w. The graphics card is an NVIDIA GF108 [Quadro 1000M]. My problem is as follows: I can get the dual monitor setup working fine, but every time I reboot, my machine appears to lose the settings (specifically, the U2410 is disabled and the mouse pointer is locked in the launcher), and I have to reapply the settings after every boot. I've tried running nvidia-settings with sudo and I've saved the changes to my xorg.conf file (see below), but nothing seems to stick. Has anyone had similar issues, or does anyone know of a fix? Conf file follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 2007WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro 1000M" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "1" Option "TwinViewXineramaInfoOrder" "DFP-1" Option "metamodes" "CRT: 1680x1050 +1920+0, DFP-1: 1920x1200 +0+0; CRT: nvidia-auto-select +0+0, DFP-1: NULL" SubSection "Display" Depth 24 EndSubSection EndSection The error message I'm getting is this: none of the selected modes were compatible with the possible modes: Trying modes for CRTC 642: CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 0) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1) CRTC 642: trying mode 3600x1080@50hz with output at 1280 x 1024@0Hz (pass 1)

    Read the article
