Search Results

Search found 3546 results on 142 pages for 'dos batch'.


  • Kubuntu 12.04 - Touchpad and keyboard stopped working at random

    - by StepTNT
    As in the title, I've got this problem with my Kubuntu 12.04. At first I've thought that the whole system was hung, but it happened again 5 minutes ago and, while the keyboard and the touchpad stopped working, the music was still playing, so I guess that's just an "input" problem, because the system was still working! Any solution? Is there some data that you need to know about my setup? EDIT: Added my lshw outout description: Notebook product: N53SV () vendor: ASUSTeK Computer Inc. version: 1.0 serial: B2N0AS17695408A width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: boot=normal chassis=notebook family=N uuid=8083F2DA-A43E-E081-3F3F-BCAEC55F8AA1 *-core description: Motherboard product: N53SV vendor: ASUSTeK Computer Inc. physical id: 0 version: 1.0 serial: BSN12345678901234567 slot: MIDDLE *-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: N53SV.214 date: 08/10/2011 size: 64KiB capacity: 2496KiB capabilities: pci upgrade shadowing cdboot bootselect edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb smartbattery biosbootspecification *-cpu description: CPU product: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz serial: To Be Filled By O.E.M. slot: CPU 1 size: 800MHz capacity: 4GHz width: 64 bits clock: 100MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm ida arat epb xsaveopt pln pts tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=4 enabledcores=1 threads=2 *-cache description: L1 cache physical id: 5 slot: L1-Cache size: 32KiB capacity: 32KiB capabilities: internal write-back instruction *-memory description: System Memory physical id: 40 slot: System board or motherboard size: 10GiB *-bank:0 description: SODIMM DDR3 Synchronous 1333 MHz (0,8 ns) product: 99U5428-040.A00LF vendor: Kingston physical id: 0 serial: 103C28C3 slot: ChannelA-DIMM0 size: 4GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: SODIMM DDR3 Synchronous 1333 MHz (0,8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 1 serial: 58383D1F slot: ChannelA-DIMM1 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: SODIMM DDR3 Synchronous 1333 MHz (0,8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 2 serial: 58183D19 slot: ChannelB-DIMM0 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:3 description: SODIMM DDR3 Synchronous 1333 MHz (0,8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 3 serial: 58183C8F slot: ChannelB-DIMM1 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-pci description: Host bridge product: 2nd Generation Core Processor Family DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 09 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0 *-pci:0 description: PCI bridge product: Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port vendor: Intel Corporation physical id: 1 bus info: pci@0000:00:01.0 version: 09 width: 32 bits clock: 33MHz capabilities: pci pm 
msi pciexpress normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:d000(size=4096) memory:db000000-dc0fffff ioport:c0000000(size=301989888) *-generic UNCLAIMED description: Unassigned class product: Illegal Vendor ID vendor: Illegal Vendor ID physical id: 0 bus info: pci@0000:01:00.0 version: ff width: 32 bits clock: 66MHz capabilities: bus_master vga_palette cap_list configuration: latency=255 maxlatency=255 mingnt=255 resources: memory:db000000-dbffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:d000(size=128) memory:dc000000-dc07ffff *-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:47 memory:dc400000-dc7fffff memory:b0000000-bfffffff ioport:e000(size=64) *-communication description: Communication controller product: 6 Series/C200 Series Chipset Family MEI Controller #1 vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: driver=mei latency=0 resources: irq:48 memory:df00b000-df00b00f *-usb:0 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:16 memory:df008000-df0083ff *-multimedia description: Audio device product: 6 Series/C200 Series Chipset Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:49 memory:df000000-df003fff *-pci:1 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:c000(size=4096) memory:de600000-deffffff ioport:d4200000(size=10485760) *-pci:2 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:b000(size=4096) memory:ddc00000-de5fffff ioport:d3700000(size=10485760) *-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: 48:5d:60:f2:2c:fd width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-24-generic firmware=N/A ip=192.168.1.6 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:ddc00000-ddc0ffff *-pci:3 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 4 vendor: Intel Corporation physical id: 1c.3 bus info: pci@0000:00:1c.3 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:a000(size=4096) memory:dd200000-ddbfffff ioport:d2c00000(size=10485760) *-usb description: USB controller product: FL1000G USB 3.0 Host Controller vendor: Fresco Logic physical id: 0 bus info: pci@0000:04:00.0 version: 04 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress xhci bus_master cap_list configuration: driver=xhci_hcd latency=0 resources: irq:19 memory:dd200000-dd20ffff *-pci:4 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 6 vendor: Intel Corporation physical id: 1c.5 bus info: pci@0000:00:1c.5 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:44 ioport:9000(size=4096) memory:dc800000-dd1fffff ioport:d2100000(size=10485760) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 06 serial: bc:ae:c5:5f:8a:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8168e-2.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:46 ioport:9000(size=256) memory:d2104000-d2104fff memory:d2100000-d2103fff *-usb:1 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:df007000-df0073ff *-isa description: ISA bridge product: HM65 Express Chipset Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 05 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0 *-storage description: SATA controller product: 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 logical name: scsi2 version: 05 width: 32 bits clock: 66MHz capabilities: storage msi pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=0 resources: irq:45 ioport:e0b0(size=8) ioport:e0a0(size=4) ioport:e090(size=8) ioport:e080(size=4) ioport:e060(size=32) memory:df006000-df0067ff *-disk description: ATA Disk product: ST9750420AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 0002 serial: 5WS0A7QR 
size: 698GiB (750GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=e0c5913d *-volume:0 description: Windows FAT volume vendor: MSDOS5.0 physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: FAT32 serial: 4ce5-3acb size: 3004MiB capacity: 3004MiB capabilities: primary fat initialized configuration: FATs=2 filesystem=fat *-volume:1 description: EXT4 volume vendor: Linux physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 logical name: / version: 1.0 serial: c198cc2a-d86a-4460-a4d5-3fc0b21e439c size: 28GiB capacity: 28GiB capabilities: primary journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2012-03-15 16:53:54 filesystem=ext4 lastmountpoint=/ modified=2012-05-02 18:52:04 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=ordered mounted=2012-05-09 19:06:01 state=mounted *-volume:2 description: Windows NTFS volume physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 version: 3.1 serial: 4c1cdebc-ec09-2947-a3b5-c1f9f1cddc1c size: 152GiB capacity: 152GiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2011-02-22 16:02:47 filesystem=ntfs label=OS state=clean *-volume:3 description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 size: 514GiB capacity: 514GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 10GiB capabilities: nofs *-logicalvolume:1 description: HPFS/NTFS partition physical id: 6 logical name: /dev/sda6 capacity: 504GiB *-cdrom description: DVD-RAM writer product: BD-MLT UJ240AS vendor: MATSHITA physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/sr0 version: 1.00 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc *-serial UNCLAIMED description: SMBus product: 6 Series/C200 Series Chipset Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 05 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:df005000-df0050ff ioport:e040(size=32)
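    A minimal diagnostic sketch for the next time the input dies while audio keeps playing, assuming a TTY or SSH session is still reachable; psmouse as the touchpad module is an assumption about this hardware, not something confirmed by the lshw output above:

        dmesg | tail -n 50                    # look for i8042 / atkbd / psmouse errors around the time of the freeze
        cat /proc/bus/input/devices           # are the keyboard and touchpad still enumerated?
        sudo modprobe -r psmouse && sudo modprobe psmouse   # reload the touchpad driver (assumed module name)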

    Read the article

  • .htaccess template, suggestions needed.

    - by purpler
    I compiled myself a .htaccess template and would like to know whether the caching and compressions is set up right, constructive suggestions and critics needed. # Defaults AddDefaultCharset UTF-8 DefaultLanguage en-US FileETag None Header unset ETag ServerSignature Off SetEnv TZ Europe/Belgrade # Rewrites Options +FollowSymLinks RewriteEngine On RewriteBase / # Redirect to WWW RewriteCond %{HTTP_HOST} ^serpentineseo.com RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L] # Redirect index to root RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.html\ HTTP/ RewriteRule ^(.*)index\.html$ /$1 [R=301,L] # Cache media files: ExpiresActive On ExpiresDefault A0 # Month <filesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$"> Header set Cache-Control "max-age=2592000, public" </filesMatch> # Week <FilesMatch "\.(css|pdf)$"> Header set Cache-Control "max-age=604800" </FilesMatch> # 10 Min <FilesMatch "\.(html|htm|txt)$"> Header set Cache-Control "max-age=600" </FilesMatch> # Do not cache <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$"> Header unset Cache-Control </FilesMatch> # Compress output <IfModule mod_deflate.c> <FilesMatch "\.(html|js|css)$"> SetOutputFilter DEFLATE </FilesMatch> </IfModule> # Error Documents ErrorDocument 206 /error/206.html ErrorDocument 401 /error/401.html ErrorDocument 403 /error/403.html ErrorDocument 404 /error/404.html ErrorDocument 500 /error/500.html # Prevent hotlinking RewriteCond %{HTTP_REFERER} !^$ RewriteCond %{HTTP_REFERER} !^http://(www\.)?serpentineseo.com/.*$ [NC] RewriteRule \.(gif|jpg|png)$ http://www.serpentineseo.com/images/angryman.png [R,L] # Prevent offline browsers RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR] RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR] RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR] RewriteCond %{HTTP_USER_AGENT} ^Custo [OR] RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR] RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR] RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR] RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR] RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR] RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR] RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR] RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR] RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR] RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR] RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR] RewriteCond %{HTTP_USER_AGENT} ^GetWeb! 
[OR] RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR] RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR] RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR] RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR] RewriteCond %{HTTP_USER_AGENT} ^HMView [OR] RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR] RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR] RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR] RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR] RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR] RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR] RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR] RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR] RewriteCond %{HTTP_USER_AGENT} ^larbin [OR] RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR] RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR] RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR] RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR] RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR] RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR] RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR] RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR] RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR] RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR] RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR] RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR] RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR] RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR] RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR] RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR] RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR] RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR] RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR] RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR] RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR] RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR] RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR] RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR] RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR] RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR] RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR] RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR] RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR] RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR] RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR] RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR] RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR] RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR] RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR] RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR] RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR] RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR] RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR] RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR] RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR] RewriteCond %{HTTP_USER_AGENT} ^Wget [OR] RewriteCond %{HTTP_USER_AGENT} ^Widow [OR] RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR] RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR] RewriteCond %{HTTP_USER_AGENT} ^Zeus RewriteRule ^.*$ http://www.google.com [R,L] # Protect against DOS attacks by limiting file upload size LimitRequestBody 10240000 # Deny access to sensitive files <FilesMatch "\.(htaccess|psd|log)$"> Order Allow,Deny Deny from all </FilesMatch>
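    One optional refinement, sketched under the assumption that mod_deflate and mod_headers are both loaded (the template already uses them): compress by MIME type instead of file extension and add a Vary header so proxies keep compressed and uncompressed responses apart.

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
        </IfModule>
        <IfModule mod_headers.c>
            Header append Vary Accept-Encoding
        </IfModule>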

    Read the article

  • Real Time BI in the Real World

    - by tobin.gilman(at)oracle.com
    One of my favorite BI offerings from Oracle is a solution called Oracle Real Time Decisions.  Whenever I mention this product in customer meetings, eyes light up.  There are some fascinating examples of customers using it to up-sell, cross-sell, increase customer retention, and reduce risk in real time, with off-the-charts return on investment. I plan to share some of those stories in a future blog.  In this post, however, I want to share some far more common real time analytics use case scenarios that are being addressed with widely deployed Oracle BI and data integration technologies. Not all real time BI applications require continuous learning, predictive modeling, and data mining.  Many simply require the ability to integrate, aggregate, and access information that is current (typically within a few minutes or a few seconds).  The use cases are infinite.  A few I've seen:
    · Purchasing agents need to match demand against available inventory
    · Manufacturing planners need to monitor current parts and material against scheduled build plans
    · Airline agents need to match ticket demand against flight schedules
    · Human resources managers need to track the status of global hiring requisitions against current headcount authorizations...you get the idea.
    One way of doing this is to run reports or federated queries directly against transactional systems.  That approach can be viable if you only need to access simple data sets on rare occasions.  High volume and complex queries can quickly bog down the performance of mission critical transactional systems.  There is an architecturally simple way of solving the problem, and it's being applied by real companies around the world to solve real needs in real time.  Cbeyond is an Atlanta, GA based provider of voice, data and mobile business applications.  They deliver real time information to their call center agents as they are interacting with their customers. The data they need resides in production CRM and other transactional systems, but instead of reporting directly off those systems, data is first moved to an operational data store (ODS).  Rather than running data intensive, time consuming, and performance degrading batch ETL routines to populate the ODS, Cbeyond uses Oracle GoldenGate software to incrementally capture and move only the changed records from the log files of the transactional systems every few minutes.
    There is no impact on transactional system performance, and the information needed by call center representatives is up to date.  Oracle Business Intelligence software presents the information to service reps in a rich, visual, and highly interactive format. Avea is similar to Cbeyond.  They are a telecommunications company that integrates billing and customer information in an ODS that is accessed by their call center agents in real time using Oracle GoldenGate and Oracle Business Intelligence.  They've taken it a step further by using the ODS to feed a data warehouse.  The operational data store provides the current information needed by call center agents during "in flight" customer interactions.  The data warehouse is used for more sophisticated analysis of historical data.  For maximum performance, both the ODS and the data warehouse run on the Oracle Exadata Database Machine. These are practical illustrations of companies addressing real time reporting and analysis needs using established business intelligence/data warehousing methodologies and tools common to many IT departments.  If real time BI could benefit your organization, you may already be closer than you thought to having the pieces in place to solve the problem.  Give us a shout if you are interested in learning more or if you have an interesting use or approach to real-time BI.
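    For readers who have not seen GoldenGate change capture configured, a purely illustrative Extract parameter-file sketch feeding an ODS trail; the process, schema and table names are hypothetical and not taken from the Cbeyond or Avea deployments:

        EXTRACT extods
        USERID ogg_admin, PASSWORD ********
        EXTTRAIL ./dirdat/od
        TABLE crm.customers;
        TABLE crm.interactions;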

    Read the article

  • Best Practices Generating WebService Proxies for Oracle Sales Cloud (Fusion CRM)

    - by asantaga
    I've recently been building a REST Service wrapper for Oracle Sales Cloud and initially all was going well, however as soon as I added all of my Web Service proxies I started to get weird errors..  My project structure looks like this What I found out was if I only had the InteractionsService & OpportunityService WebService Proxies then all worked ok, but as soon as I added the LocationsService Proxy, I would start to see strange JAXB errors. Example of the error message Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContextat com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:164)at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:281)at com.sun.xml.ws.client.WSServiceDelegate.buildRuntimeModel(WSServiceDelegate.java:762)at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.buildRuntimeModel(WLSProvider.java:982)at com.sun.xml.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:746)at com.sun.xml.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:737)at com.sun.xml.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:361)at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.internalGetPort(WLSProvider.java:934)at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate$PortClientInstanceFactory.createClientInstance(WLSProvider.java:1039)...... Looking further down I see the error message is related to JAXB not being able to find an objectFactory for one of its types Caused by: java.security.PrivilegedActionException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 6 counts of IllegalAnnotationExceptionsThere's no ObjectFactory with an @XmlElementDecl for the element {http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/}AssigneeRsrcOrgIdthis problem is related to the following location:at protected javax.xml.bind.JAXBElement com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee.assigneeRsrcOrgId at com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee This is very strange... My first thoughts are that when I generated the WebService Proxy I entered the package name as "oracle.demo.pts.fusionproxy.servicename" and left the generated types as blank. This way all the generated types get put into the same package hierarchy and when deployed they get merged... Sounds resaonable and appears to work but not in this case..  To resolve this I regenerate the proxy but this time setting : Package name : To the name of my package eg. oracle.demo.pts.fusionproxy.interactionsRoot Package for Generated Types :  Package where the types will be generated to, e.g. oracle.demo.pts.fusionproxy.SalesParty.types When I ran the application now, it all works , awesome eh???? Alas no, there is a serious side effect. The problem now is that to help coding I've created a collection of helper classes , these helper classes take parameters which use some of the "generic" datatypes, like FindCriteria. e.g. This wont work any more public static FindCriteria createCustomFindCriteria(FindCriteria pFc,String pAttributes) Here lies a gremlin of a problem.. I cant use this method anymore, this is because the FindCriteria datatype is now being defined two, or more times, in the generated code for my project. 
If you leave the Root Package for types blank it will get generated to com.oracle.xmlns, and if you populate it then it gets generated to your custom package.. The two datatypes look the same, sound the same (and if this were a duck would sound the same), but THEY ARE NOT THE SAME... Speaking to development, they recommend you should not be entering anything in the Root Packages section, so the mystery thickens why does it work.. Well after spending sometime with some colleagues of mine in development we've identified the issue.. Alas different parts of Oracle Fusion Development have multiple schemas with the same namespace, when the WebService generator generates its classes its not seeing the other schemas properly and not generating the Object Factories correctly...  Thankfully I've found a workaround Solution Overview When generating the proxies leave the Root Package for Generated Types BLANK When you have finished generating your proxies, use the JAXB tool XJC and generate Java classes for all datatypes  Create a project within your JDeveloper11g workspace and import the java classes into this project Final bit.. within the project dependencies ensure that the JAXB/XJC generated classes are "FIRST" in the classpath Solution Details Generate the WebServices SOAP proxies When generating the proxies your generation dialog should look like this Ensure the "unwrap" parameters is selected, if it isn't then that's ok, it simply means when issuing a "get" you need to extract out the Element Generate the JAXB Classes using XJC XJC provides a command line switch called -wsdl, this (although is experimental/beta) , accepts a HTTP WSDL and will generate the relevant classes. You can put these into a single batch/shell script xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdlxjc -wsdl https://fusionservername443/opptyMgmtOpportunities/OpportunityService?wsdl Create Project in JDeveloper to store the XJC "generated" JAXB classes Within the project folder create a filesystem folder called "src" and copy the generated files into this folder. JDeveloper11g should then see the classes and display them, if it doesnt try clicking the "refresh" button In your main project ensure that the JDeveloper XJC project is selected as a dependancy and IMPORTANT make sure it is at the top of the list. This ensures that the classes are at the front of the classpath And voilà.. Hopefully you wont see any JAXB generation errors and you can use common datatypes interchangeably in your project, (e.g. FindCriteria etc)
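    The two xjc calls in the "Generate the JAXB Classes using XJC" step above run together in the text; restated as a hedged batch sketch, with the hostname and port kept as the post's placeholders and the missing colon in the second URL assumed:

        xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl
        xjc -wsdl https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl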

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions.  Below are highlights of the most informative: DBaaS requires a lengthy and careful design efforts. What is the minimum requirements of setting up a scaled-down environment and test it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust / real world configuration with Zones and Pools or DR capabilities for example. How does this benefit companies having their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the consumers (internal) of services if much fast provisioning, and response to change in business requirements. From a deployment perspective, is DBaaS's job solely DBA's job? The best deployment model enables the DBA (or end-user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, sysadmins teams). The service is created either via a self-service portal or by the DBA. The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why shall I set it one by one. That doesn't seem to be the purpose of self service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.  Looking at DBaaS service definition, it seems to be no different from a service definition provided by a well defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future state DBaaS will look a lot different from their current state, as will their Service Definition. How database as a service impact or help with "Click to Compute" or deploying "Database in cloud infrastructure" DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database and consolidation using virtualization in infrastructure cloud. 
As Deploy session showed, you get higher consolidating density and efficiency using Multitenant and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models. How exactly is the DBaaS different from the traditional db? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be across-databases access by application/user. Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, shrunk as needed to meet business needs without disruption and rapidly. 4) The user is able to provision the services directly from a standardized service catalog.. With 24x7x365 databases its difficult to find off peak hrs to do basic admin tasks such as gathering stats, running backups, batch jobs. How does pluggable database handle this and different needs/patching downtime of apps databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant db in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.  Thanks for all the great questions!  If you'd like to learn more and missed the online forum, you can watch it on demand here.
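    As a concrete illustration of the unplug/plug flow described in the patching answer above, a minimal SQL sketch; the PDB name and manifest path are placeholders, and the patched CDB is assumed to already be provisioned in its own ORACLE_HOME:

        -- on the source CDB
        ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';
        -- on the patched / upgraded CDB
        CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY TEMPFILE REUSE;
        ALTER PLUGGABLE DATABASE pdb1 OPEN;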

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own, still have some ideas with good potential, some half done projects frozen in my github. But due to social pressures, I chose a job, the pay is great, but I am half-passionate about it. A small team of smart folks building useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week together not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why is it happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only students, 6th grade had access to computer labs. I once peeked into the lab from the little door opening. No hard-disks, MS DOS on 5 1/2 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, asked my brother to teach me some programming. We bought a book "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file i/o, game programming, machine level support, etc. I was in 6th standard, wrote my first game - a wheel of fortune, rotated the wheel by manipulating 16 color palette's definition. Got internet soon, got hooked to QuickBasic programming community. Made some more games "007 in Danger", "Car Crush 2" for submission to allbasiccode archives. I was extremely excited about all this. My interests now swayed into "hacking" (computer security). Taught myself some perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one of the winters and wrote a pacman clone with Direct X. By the time I was in 10th standard, I created some evil tools using visual basic, php and mysql and eventually landed myself into an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon, last two years of school were not so great as I was balancing preps for college, work at govt. and studies for school at same time. College - College was opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, ban on personal laptops, hardly any hackers surrounding you and shit like that. We had to take permissions to even introduce some cultural/creative activities in our annual schedule. The labs won't be open on weekends because the lab employees had to have their leaves. Yes, a horrible place for someone like me. I still managed to pull out a project with a friend over 2 months. Showed it to people high in the academia hierarchy. They were immensely impressed, we proposed to allow personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on, I still managed to teach myself new languages, do new projects of my own, do an intern at the same govt. facility, start a small business for sometime, give a talk at a conference I'm passionate about, win game-dev and hacking contest at most respected colleges, solve good deal of programming contest problems, etc. 
At the same time I was not content with all these restrictions, great emphasis on rote learning, and sheer wastage of time due to college. I never felt I was overdoing, but now I feel I burnt myself out. During my last days at college, I did an intern at a bigco. While I spent my time building prototypes for certain LBS, the other interns around me, even a good friend, was just skipping time. I thought maybe, in a few weeks he would put in some serious efforts at work assigned to him, but all he did was to find creative ways to skip work, hide his face from manager, engage people in talks if they try to question his progress, etc. I tried a few time to get him on track, but it seems all he wanted was to "not to work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very very poisonous to one's own motivation and productivity. Over that, the place where I come from, HRs don't give much value to what have you done past 4 years. So towards the end of out intern, we all were offered work at the bigco, but the slacker, even after not writing more than 200 lines of code was made a much better offer. I felt enraged instantly - "Is this how the corp world treats someone who does fruitful, if not extra-ordinary work form them for past 6 months?". Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to be in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring, at the same time I with to maintain it for financial reasons. I feel a bit burnt out, unsatisfied and at the same time an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable). Though I haven't got much to support myself with food, office, internet bills, etc in savings. I still have my day job, but I don't find it very interesting, even though the pay is higher than the slacker, I don't find money to be a great motivator here. I keep comparing myself to my past version. I wonder how to get rid of this and reboot myself back to the way I was in school days - excited about it, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Samba doesn't require a password on XBMC but does on Ubuntu

    - by Chris
    I have samba setup on a fedora 13 machine, and I use it to share with my xbmc client in the family room. When I set this up there no password or anything was required I merely entered in paths such as: smb://<host>/<share> and all worked. Now on my ubuntu 10.04 machine when I try to access the same hosts, for example through smbmount though I receive an error. smbmount //media/Music ~/Music/ # media is in my /etc/hosts and resolves to # correct IP address for the machine I receive error: operation not permitted after pressing enter when it prompts for password. Here is my entry from /etc/samba/smb.conf: [global] workgroup = WORKGROUP server string = Samba Server Version %v # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 security = user passdb backend = tdbsam ; security = domain ; passdb backend = tdbsam ; realm = MY_REALM ; password server = <NT-Server-Name> ; security = user ; passdb backend = tdbsam ; domain master = yes ; domain logons = yes ; logon script = %m.bat ; logon script = %u.bat ; logon path = \\%L\Profiles\%u ; logon path = ; add user script = /usr/sbin/useradd "%u" -n -g users ; add group script = /usr/sbin/groupadd "%g" ; add machine script = /usr/sbin/useradd -n -c "Workstation (%u)" -M -d /nohome -s /bin/false "%u" ; delete user script = /usr/sbin/userdel "%u" ; delete user from group script = /usr/sbin/userdel "%u" "%g" ; delete group script = /usr/sbin/groupdel "%g" ; local master = no ; os level = 33 ; preferred master = yes ; wins support = yes ; wins server = w.x.y.z ; wins proxy = yes ; dns proxy = yes load printers = yes cups options = raw ; printcap name = /etc/printcap # obtain a list of printers automatically on UNIX System V systems: ; printcap name = lpstat ; printing = cups ; map archive = no ; map hidden = no ; map read only = no ; map system = no ; store dos attributes = yes #============================ Share Definitions ============================== [homes] comment = Home Directories browseable = no writable = yes ; valid users = %S ; valid users = MYDOMAIN\%S # Un-comment the following and create the netlogon directory for Domain Logons: ; [netlogon] ; comment = Network Logon Service ; path = /var/lib/samba/netlogon ; guest ok = yes ; writable = no ; share modes = no # Un-comment the following to provide a specific roving profile share. # The default is to use the user's home directory: ; [Profiles] ; path = /var/lib/samba/profiles ; browseable = no ; guest ok = yes # A publicly accessible directory that is read only, except for users in the # "staff" group (which have write permissions): ; [public] ; comment = Public Stuff ; path = /home/samba ; public = yes ; writable = yes ; printable = no ; write list = +staff [tv] comment = TV path = /media/Isos/tv public = yes writable = yes printable = no write list = +media [music] comment = Music path = /media/Storage/music/ public = yes writable = yes printable = no write list = +media [pictures] comment = Pictures path = /media/Storage/pictures public = yes writable = yes printable = no write list = +media
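    For comparison, a hedged sketch of mounting the same share from the Ubuntu side with mount.cifs (cifs-utils assumed installed; "media" is the hostname from /etc/hosts above, "music" the share name from smb.conf, and "chris" a placeholder Samba account created on the Fedora box with smbpasswd -a):

        # guest mount, if the share really is meant to be open the way it is for XBMC
        sudo mount -t cifs //media/music ~/Music -o guest
        # or with an explicit Samba user
        sudo mount -t cifs //media/music ~/Music -o username=chris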

    Read the article

  • Information about a pendrive with libudev on Linux

    - by Catanzaro
    I'm doing a little app in C that read the driver information of my pendrive: Plugged it and typed dmesg: [ 7676.243994] scsi 7:0:0:0: Direct-Access Lexar USB Flash Drive 1100 PQ: 0 ANSI: 0 CCS [ 7676.248359] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 7676.256733] sd 7:0:0:0: [sdb] 7831552 512-byte logical blocks: (4.00 GB/3.73 GiB) [ 7676.266559] sd 7:0:0:0: [sdb] Write Protect is off [ 7676.266566] sd 7:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 7676.266569] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 7676.285373] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 7676.285383] sdb: sdb1 [ 7676.298661] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 7676.298667] sd 7:0:0:0: [sdb] Attached SCSI removable disk with "udevadm info -q all -n /dev/sdb" P: /devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb N: sdb W: 36 S: block/8:16 S: disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0 S: disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0 E: UDEV_LOG=3 E: DEVPATH=/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb E: MAJOR=8 E: MINOR=16 E: DEVNAME=/dev/sdb E: DEVTYPE=disk E: SUBSYSTEM=block E: ID_VENDOR=Lexar E: ID_VENDOR_ENC=Lexar\x20\x20\x20 E: ID_VENDOR_ID=05dc E: ID_MODEL=USB_Flash_Drive E: ID_MODEL_ENC=USB\x20Flash\x20Drive\x20 E: ID_MODEL_ID=a813 E: ID_REVISION=1100 E: ID_SERIAL=Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0 E: ID_SERIAL_SHORT=AA5OCYQII8PSQXBB E: ID_TYPE=disk E: ID_INSTANCE=0:0 E: ID_BUS=usb E: ID_USB_INTERFACES=:080650: E: ID_USB_INTERFACE_NUM=00 E: ID_USB_DRIVER=usb-storage E: ID_PATH=pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0 E: ID_PART_TABLE_TYPE=dos E: UDISKS_PRESENTATION_NOPOLICY=0 E: UDISKS_PARTITION_TABLE=1 E: UDISKS_PARTITION_TABLE_SCHEME=mbr E: UDISKS_PARTITION_TABLE_COUNT=1 E: DEVLINKS=/dev/block/8:16 /dev/disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0 /dev/disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0 and my software is: Codice: Seleziona tutto #include <stdio.h> #include <libudev.h> #include <stdlib.h> #include <locale.h> #include <unistd.h> int main(void) { struct udev_enumerate *enumerate; struct udev_list_entry *devices, *dev_list_entry; struct udev_device *dev; /* Create the udev object */ struct udev *udev = udev_new(); if (!udev) { printf("Can't create udev\n"); exit(0); } enumerate = udev_enumerate_new(udev); udev_enumerate_add_match_subsystem(enumerate, "scsi_generic"); udev_enumerate_scan_devices(enumerate); devices = udev_enumerate_get_list_entry(enumerate); udev_list_entry_foreach(dev_list_entry, devices) { const char *path; /* Get the filename of the /sys entry for the device and create a udev_device object (dev) representing it */ path = udev_list_entry_get_name(dev_list_entry); dev = udev_device_new_from_syspath(udev, path); /* usb_device_get_devnode() returns the path to the device node itself in /dev. */ printf("Device Node Path: %s\n", udev_device_get_devnode(dev)); /* The device pointed to by dev contains information about the hidraw device. In order to get information about the USB device, get the parent device with the subsystem/devtype pair of "usb"/"usb_device". This will be several levels up the tree, but the function will find it.*/ dev = udev_device_get_parent_with_subsystem_devtype( dev, "block", "disk"); if (!dev) { printf("Errore\n"); exit(1); } /* From here, we can call get_sysattr_value() for each file in the device's /sys entry. 
The strings passed into these functions (idProduct, idVendor, serial, etc.) correspond directly to the files in the directory which represents the USB device. Note that USB strings are Unicode, UCS2 encoded, but the strings returned from udev_device_get_sysattr_value() are UTF-8 encoded. */ printf(" VID/PID: %s %s\n", udev_device_get_sysattr_value(dev,"idVendor"), udev_device_get_sysattr_value(dev, "idProduct")); printf(" %s\n %s\n", udev_device_get_sysattr_value(dev,"manufacturer"), udev_device_get_sysattr_value(dev,"product")); printf(" serial: %s\n", udev_device_get_sysattr_value(dev, "serial")); udev_device_unref(dev); } /* Free the enumerator object */ udev_enumerate_unref(enumerate); udev_unref(udev); return 0; } the problem is that i obtain in output: Device Node Path: /dev/sg0 Errore and dont view information. subsystem and the devtype i think that are inserted well : "block" and "disk". thanks for help. Bye
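    One observation, offered as an assumption rather than a confirmed fix: udev_device_get_parent_with_subsystem_devtype(dev, "block", "disk") asks for a block/disk parent of an sg device, which is why it returns NULL; idVendor, idProduct and serial actually live on the USB device node. A minimal C sketch of climbing to that node instead, modeled on the standard libudev hidraw example:

        /* walk up from the enumerated device to the USB device node,
           which is where idVendor / idProduct / serial are exposed as sysattrs */
        struct udev_device *usb =
            udev_device_get_parent_with_subsystem_devtype(dev, "usb", "usb_device");
        if (usb) {
            printf(" VID/PID: %s %s\n",
                   udev_device_get_sysattr_value(usb, "idVendor"),
                   udev_device_get_sysattr_value(usb, "idProduct"));
            printf(" serial: %s\n",
                   udev_device_get_sysattr_value(usb, "serial"));
            /* do not unref 'usb': the parent is owned by the child device */
        }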

    Read the article

  • WIX installer with Custom Actions: "built by a runtime newer than the currently loaded runtime and cannot be loaded."

    - by Rimer
    I have a WIX installer that executes Custom Actions over the course of install. When I run the WIX installer, and it encounters its first Custom Action, the installer fails out, and I receive an error in the MSI log as follows: Action start 12:03:53: LoadBCAConfigDefaults. SFXCA: Extracting custom action to temporary directory: C:\DOCUME~1\ELOY06~1\LOCALS~1\Temp\MSI10C.tmp-\ SFXCA: Binding to CLR version v2.0.50727 Calling custom action WIXCustomActions!WIXCustomActions.CustomActions.LoadBCAConfigDefaults Error: could not load custom action class WIXCustomActions.CustomActions from assembly: WIXCustomActions System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. File name: 'WIXCustomActions' at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.AppDomain.Load(String assemblyString) at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.GetCustomActionMethod(Session session, String assemblyName, String className, String methodName) ... the specific problem from above is "System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded." The wordage of that error seems to indicate something like an incorrectly referenced .NET framework or something (I'm targeting 3.5 in both my custom actions and its dependencies), but I can't figure out where to make a change to address this problem. Any ideas? .... 
Not sure if this will help but it's the CustomActions package batch file I run to create the .dll package containing the custom action functions: =============== call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" @echo on cd "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions" csc /target:library /r:"C:\program files\windows installer xml v3.6\sdk\microsoft.deployment.windowsinstaller.dll" /r:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\eLoyalty.PortalLib.dll" /out:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" CustomActions.cs cd "C:\Program Files\Windows Installer XML v3.6\SDK" makesfxca "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\BatchCustomerAnalysisWIXCustomActionsPackage.dll" "c:\program files\windows installer xml v3.6\sdk\x86\sfxca.dll" "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" customaction.config Microsoft.Deployment.WindowsInstaller.dll
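    The "built by a runtime newer than the currently loaded runtime" wording usually means the custom action DLL was compiled against .NET 4 (the batch file calls the Visual Studio 2010 csc) while SFXCA binds to CLR v2.0.50727, exactly as the log shows. Two hedged options: compile with the 3.5-era compiler (C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe), or let the v4 CLR load by shipping a customaction.config along the lines of the sketch below, which the MakeSfxCA call in the batch file already embeds:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
            <supportedRuntime version="v2.0.50727"/>
          </startup>
        </configuration>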

    Read the article

  • Debugging Messaging Exception

    - by rizza
    We have a batch program that incorporates JavaMail 1.2 that sends emails. In our development environment, we haven't got the chance to encounter the above mentioned exception. But in the client's environment, they had experienced this a lot of times with the following error trace: javax.mail.MessagingException: 550 Requested action not taken: NUL characters are not allowed. at com.sun.mail.smtp.SMTPTransport.issueCommand (SMTPTransport.java: 879) at com.sun.mail.smtp.SMTPTransport.finishData (SMTPTransport.java: 820) at com.sun.mail.smtp.SMTPTransport.sendMessage (SMTPTransport.java: 322) ... I'm not sure if this is connected to my problem, http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4697158. But trying JavaMail 1.4.2, I see that the content transfer encoding of the email is still 7bit, so I'm not sure if using JavaMail 1.4.2 could solve the problem. Please take note that I could only do testing in our development environment that hasn't been able to replicate this. With the above exception, how would i know if this is from the sender or the receiver side? What debugging steps could you suggest? EDIT: Here is a DEBUG of the actual sending (masked some information): DEBUG: not loading system providers in &lt;java.home&gt;</a>/lib DEBUG: not loading optional custom providers file: /META-INF/javamail.providers DEBUG: successfully loaded default providers DEBUG: Tables of loaded providers DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]} DEBUG: Providers Listed By Protocol: {imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]} DEBUG: not loading optional address map file: /META-INF/javamail.address.map DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth false DEBUG: SMTPTransport trying to connect to host "nnn.nnn.n.nnn", port nn DEBUG SMTP RCVD: 220 xxxx.xxxxxxxxxxx.xxx SMTP; Mon, 23 Mar 2009 15:18:57 +0800 DEBUG: SMTPTransport connected to host "nnn.nnn.n.nnn", port: nn DEBUG SMTP SENT: EHLO xxxxxxxxx DEBUG SMTP RCVD: 250 xxxx.xxxxxxxxxxx.xxx Hello DEBUG SMTP: use8bit false DEBUG SMTP SENT: MAIL FROM:<a href="newmsg.cgi?mbx=Main&[email protected]">&lt;[email protected]&gt;</a> DEBUG SMTP RCVD: 250 <a href="newmsg.cgi?mbx=Main&[email protected]">&lt;[email protected]&gt;</a>... Sender ok DEBUG SMTP SENT: RCPT TO:&lt;[email protected]&gt; DEBUG SMTP RCVD: 250 &lt;[email protected]&gt;... Recipient ok Verified Addresses &nbsp;&nbsp;[email protected] DEBUG SMTP SENT: DATA DEBUG SMTP RCVD: 354 Enter mail, end with "." on a line by itself DEBUG SMTP SENT: . DEBUG SMTP RCVD: 550 Requested action not taken: NUL characters are not allowed.
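    Since the 550 complains specifically about NUL characters, one low-risk client-side check is to strip NUL bytes from whatever feeds the message text and to turn on session debugging so the full SMTP dialog gets logged at the client site. A sketch, where bodyText, message and session stand in for the batch program's own objects:

        // remove NUL bytes before the text reaches the SMTP DATA phase
        String cleaned = bodyText.replaceAll("\\x00", "");
        message.setText(cleaned);

        // log the complete SMTP conversation on the client side
        session.setDebug(true);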

    Read the article

  • SQL 2008 R2 login/network issue

    - by martinjd
    I have a Windows Server 2008 R2 new clean install , not a VM, that I have added to a Windows Server 2003 based domain using my account which has domain admin rights. The domain functional level is 2003. I performed a clean install of SQL Server 2008 R2 using my account which has domain admin rights. The installation completed without any errors. I logged into SSMS locally and attempted to add another domain account by clicking Search, Advanced and finding the user in the domain. When I return to the "Dialog - New" window and click OK I receive the following error: Create failed for Login 'Domain\User'. (Microsoft.SqlServer.Smo) An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Windows NT user or group 'Domain\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401) I have verified that the firewall is off, tried adding a different domain user, tried using SA to add a user, installed the hotfix for KB 976494 and verified that the Local Security Policy for Domain Member: Digitally encrypt or sign secure channel Domain Member: Digitally encrypt secure channel Domain Member: Digitally sign secure channel are disabled none of which have made a difference. I can RDP to a Server 2003 server running SQL 2008 and add the same domain user without issue. Also if I try to connect with SSMS to the sql server from another system on the domain using my account I get the following error: Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) and on the database server I see the following in the security event log: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: myUserName Account Domain: MYDOMAIN Failure Information: Failure Reason: An Error occured during Logon. Status: 0xc000018d Sub Status: 0x0 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: MYWKS Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I am sure that the "NULL SID" has some significant meaning but have no idea at this point what the issue could be.
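    Error 15401 combined with the NULL SID / NTLM failures in the security log usually points at the new server's secure channel to the 2003 domain rather than at SQL Server itself. A couple of hedged checks to run on the 2008 R2 box (nltest ships with it; the DC name and account below are placeholders):

        nltest /sc_query:MYDOMAIN
        nltest /sc_verify:MYDOMAIN
        REM if the channel is broken, reset it or re-join the domain:
        netdom resetpwd /Server:DC01 /UserD:MYDOMAIN\adminuser /PasswordD:*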

    Read the article

  • Using FFMPEG to reliably convert videos to mp4 for iphone/ipod and flash players

    - by Jake Stevenson
    I need to convert videos for use in both a Flash player and the iPhone/iPod touch. I'm using the following batch script with ffmpeg:

    @echo off
    ffmpeg.exe -i %1 -s qvga -acodec libfaac -ar 22050 -ab 128k -vcodec libx264 -threads 0 -f ipod %2

    This always outputs an mp4 file, and I can always play it on my PC. The videos also seem to play fine on my iPhone 3GS, but with some input files the result won't play on older devices (iPhone 3G and iPod touch). Here's the ffmpeg output from one such file:

    D:\ffmpeg>encode.bat d:\temp\recording.flv d:\temp\out.m4v
    FFmpeg version SVN-r18709, Copyright (c) 2000-2009 Fabrice Bellard, et al.
    configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaac --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid --enable-libschroedinger --enable-libx264
    libavutil 50. 3. 0 / 50. 3. 0
    libavcodec 52.27. 0 / 52.27. 0
    libavformat 52.32. 0 / 52.32. 0
    libavdevice 52. 2. 0 / 52. 2. 0
    libswscale 0. 7. 1 / 0. 7. 1
    built on Apr 28 2009 04:04:42, gcc: 4.2.4
    [flv @ 0x187d650]skipping flv packet: type 18, size 164, flags 0
    Input #0, flv, from 'd:\temp\recording.flv':
    Duration: 00:00:07.17, start: 0.001000, bitrate: N/A
    Stream #0.0: Video: flv, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc
    Stream #0.1: Audio: nellymoser, 44100 Hz, mono, s16
    [libx264 @ 0x13518b0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
    [libx264 @ 0x13518b0]profile Baseline, level 4.2
    Output #0, ipod, to 'd:\temp\out.m4v':
    Stream #0.0: Video: libx264, yuv420p, 320x240, q=2-31, 200 kb/s, 1k tbn, 1k tbc
    Stream #0.1: Audio: libfaac, 22050 Hz, mono, s16, 128 kb/s
    Stream mapping: Stream #0.0 -> #0.0  Stream #0.1 -> #0.1
    Press [q] to stop encoding
    frame= 90 fps= 0 q=-1.0 Lsize= 128kB time=6.87 bitrate= 152.4kbits/s
    video:92kB audio:32kB global headers:1kB muxing overhead 2.620892%
    [libx264 @ 0x13518b0]slice I:8 Avg QP:29.62 size: 7047
    [libx264 @ 0x13518b0]slice P:82 Avg QP:30.83 size: 467
    [libx264 @ 0x13518b0]mb I I16..4: 17.9% 0.0% 82.1%
    [libx264 @ 0x13518b0]mb P I16..4: 0.6% 0.0% 0.0% P16..4: 23.1% 0.0% 0.0% 0.0% 0.0% skip:76.3%
    [libx264 @ 0x13518b0]final ratefactor: 57.50
    [libx264 @ 0x13518b0]SSIM Mean Y:0.9544735
    [libx264 @ 0x13518b0]kb/s:8412.6

    My suspicion is that it has something to do with the audio encoding. If so, does anyone know how to force it to re-encode the audio to the proper format? Any other ideas?

    Read the article

  • Flash Builder missing fundamental things

    - by Bart van Heukelom
    All of a sudden Flash Builder 4 is missing all kinds of fundamental things and is generating incorrect errors. I had the same issue yesterday and fixed it by downloading a new Flex SDK and importing it into Flash Builder. I did the same again, but this time it fixed nothing. I don't think it is something I did, such as removing critical references from the build path; the errors also appeared on projects I was not working on at the time. It occurs for ActionScript, Flex and Flex Library projects alike. Update: I found this stack trace in the Flash Builder error log:

    !ENTRY com.adobe.flexbuilder.project 4 43 2010-05-11 11:55:47.495
    !MESSAGE Uncaught exception in compiler
    !STACK 0
    java.lang.NullPointerException
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2592)
    at macromedia.asc.parser.VariableBindingNode.evaluate(VariableBindingNode.java:64)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2233)
    at macromedia.asc.parser.ListNode.evaluate(ListNode.java:44)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2578)
    at macromedia.asc.parser.VariableDefinitionNode.evaluate(VariableDefinitionNode.java:48)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
    at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2503)
    at macromedia.asc.parser.WithStatementNode.evaluate(WithStatementNode.java:44)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
    at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2891)
    at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2905)
    at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3643)
    at macromedia.asc.parser.ClassDefinitionNode.evaluate(ClassDefinitionNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3371)
    at macromedia.asc.parser.ProgramNode.evaluate(ProgramNode.java:80)
    at flex2.compiler.as3.As3Compiler.analyze4(As3Compiler.java:709)
    at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:3089)
    at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:2977)
    at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:528)
    at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274)
    at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1496)
    at flex2.tools.oem.Application.compile(Application.java:1188)
    at flex2.tools.oem.Application.recompile(Application.java:1133)
    at flex2.tools.oem.Application.compile(Application.java:819)
    at flex2.tools.flexbuilder.BuilderApplication.compile(BuilderApplication.java:344)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder$MyBuilder.mybuild(ASApplicationBuilder.java:276)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder.build(ASApplicationBuilder.java:127)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASBuilder.build(ASBuilder.java:190)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASItemBuilder.build(ASItemBuilder.java:74)
    at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.buildItem(FlexProjectBuilder.java:480)
    at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.build(FlexProjectBuilder.java:306)
    at com.adobe.flexbuilder.project.compiler.internal.FlexIncrementalBuilder.build(FlexIncrementalBuilder.java:157)
    at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627)
    at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201)
    at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253)
    at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256)
    at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309)
    at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341)
    at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140)
    at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • Eclipselink factory.createEntityManager() stalls with more than one instance running

    - by Raven
    Hi, with my RCP program I have the problem that I want to have more than one copy running on my PC. The first instance runs very well. If I start a second instance, everything is fine until I want to access the database, using this code:

    ..
    Map properties = new HashMap();
    properties.put("javax.persistence.jdbc.driver", dbDriver);
    properties.put("javax.persistence.jdbc.url", dbUrl);
    properties.put("javax.persistence.jdbc.user", dbUser);
    properties.put("javax.persistence.jdbc.password", dbPass);
    try {
        factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME, properties);
    } catch (Exception e) {
        System.out.println(e);
    }
    em = factory.createEntityManager(); // the second instance stops here
    ...

    with a persistence.xml of:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
      <persistence-unit name="default">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <class>myclasshere</class>
        <properties>
          <property name="eclipselink.ddl-generation" value="create-tables" />
          <property name="eclipselink.ddl-generation.output-mode" value="database" />
          <property name="eclipselink.jdbc.read-connections.min" value="1" />
          <property name="eclipselink.jdbc.write-connections.min" value="1" />
          <property name="eclipselink.jdbc.batch-writing" value="JDBC" />
          <property name="eclipselink.logging.level" value="SEVERE" />
          <property name="eclipselink.logging.timestamp" value="false" />
          <property name="eclipselink.logging.session" value="false" />
          <property name="eclipselink.logging.thread" value="false" />
        </properties>
      </persistence-unit>
    </persistence>

    The program stalls in the second instance at this step: em = factory.createEntityManager(); Debugging the program step by step showed me that nothing happens at this point. Perhaps the program runs into a timeout; I haven't waited for more than a minute. Do you have any clues as to what might cause this stall?
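
    One way to narrow this down, as a sketch under the assumption that the stall is in the JDBC layer rather than in EclipseLink itself: open a raw connection with the same dbUrl/dbUser/dbPass before building the factory, with a short login timeout so it fails fast instead of hanging. The H2-style URL below is only a placeholder. If the second instance blocks or fails here too, the database is likely an embedded/file-mode one that allows only a single process, and running it in server mode would be the thing to investigate.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ConnectionProbe {
            // Probes the same JDBC settings the RCP app uses before EclipseLink does.
            // dbUrl/dbUser/dbPass mirror the variables in the question; values are placeholders.
            public static void main(String[] args) {
                String dbUrl = "jdbc:h2:file:~/appdata/db";   // assumption: replace with the real dbUrl
                String dbUser = "sa";
                String dbPass = "";
                DriverManager.setLoginTimeout(5);             // fail fast instead of stalling silently
                try (Connection c = DriverManager.getConnection(dbUrl, dbUser, dbPass)) {
                    System.out.println("JDBC connect OK: " + c.getMetaData().getDatabaseProductName());
                } catch (Exception e) {
                    // If the second instance lands here (or blocks), the database itself is
                    // refusing a second process, which is typical for embedded file-mode databases.
                    System.out.println("JDBC connect failed: " + e);
                }
            }
        }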

    Read the article

  • MySql multiple selects batching in .net

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries; that is, for each x-axis point I need to fire 20 SQL queries. I need to plot 40 such points in the chart. This results in pathetic performance: it takes close to a minute to get all the data back from the database.

    Each of the 4 different queries consists of a join between 2 tables. One has only 6 rows, the other close to 10,000. Each of the 4 queries has a different WHERE clause, so they are different queries. For each point on the x-axis, only the values used in the WHERE clauses change.

    I have tried combining each of the 4 queries into one big string, basically batching the four SELECTs. These are again batched for each y-axis value, so for each x-axis point I now fire one big command that consists of 20 different SELECT statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it takes 50-55 seconds... not much of a help.

    I am using MySQL 5.1 and the 6.1 version of its .NET connector. What can I do to improve the performance?

    Edit: One of the 4 queries is as follows:

    SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)) AS TotalTime
    FROM Table T1
    JOIN Table T2 ON T1.col3 = T2.col3
    WHERE T1.col4 = 'i'
      AND T1.col1 >= '2009-12-25 00:00:00'
      AND T1.col2 <= '2009-12-26 00:00:00';

    The other 3 queries are similar; only the WHERE clause changes slightly. This set of 4 queries is fired 5 times: the first 3 times against the join of tables T1 and T2, passing in different values for col4, and the next two times against the join of tables T3 and T2, passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point.

    The data returned by all these queries is in the same format, so we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however: after indexing the foreign key on table T1 (while it contained over a lakh, i.e. 100,000, records), the queries were using the index but had become slower. At times the queries would take double the time to return the data.
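
    Batching 20 SELECTs still makes the server scan the join 20 times per point. A usually bigger win is to collapse the per-point date ranges into one grouped query per y-series, so the T1/T2 join is scanned once for all 40 points. The sketch below only illustrates the query shape; it is written in Java/JDBC rather than the .NET connector used in the question, the table and column names follow the example query, and bucketing by DATE(T1.col1) is an assumption about how x-axis points map to days.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class ChartSeriesLoader {
            public static void main(String[] args) throws Exception {
                // One grouped query instead of one query per x-axis point.
                String sql =
                    "SELECT DATE(T1.col1) AS bucket, " +
                    "       SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)) AS totalTime " +
                    "FROM T1 JOIN T2 ON T1.col3 = T2.col3 " +
                    "WHERE T1.col4 = ? AND T1.col1 >= ? AND T1.col2 <= ? " +
                    "GROUP BY DATE(T1.col1)";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/chartdb", "user", "pass");   // placeholder connection
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, "i");
                    ps.setString(2, "2009-12-01 00:00:00");   // start of the whole 40-day window (placeholder)
                    ps.setString(3, "2010-01-10 00:00:00");   // end of the window (placeholder)
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getDate("bucket") + " -> " + rs.getDouble("totalTime"));
                        }
                    }
                }
            }
        }

    The same statement translated to MySql.Data in C# would behave identically; the point is the GROUP BY, not the client library.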

    Read the article

  • Contract developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead.

    Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data!

    Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. Moreover, I think a bank is fundamentally not a place that can make good software.

    This is my plan:
    1. Write a report explaining in depth all the problems with their current system.
    2. Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3. Write a high-level development plan, including what resources I will require.
    4. Hand 1, 2 and 3 to my manager; hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad.
    5. A convinced executive decides to award my company a contract for the new system.

    I have 8 years of experience as a software contractor and have delivered my share of successful software products, but always working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan, but since this is the first time I'm doing something this bold I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

    Read the article

  • Bottom button bar overlaps the last element of Listview!!

    - by elto
    I have a ListView which is part of an Activity. I want the user to have the option of batch-deleting the items in the ListView, so when he chooses the corresponding option from the menu, every list item gets a checkbox next to it. When the user clicks any checkbox, a button bar slides up from the bottom (as in the Gmail app); clicking the delete button deletes the selected items, while clicking the cancel button on the bar unchecks all the checked items. This is my page layout.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:background="@android:color/transparent" >
        <FrameLayout
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <LinearLayout
                android:id="@+id/list_area"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
                <ListView
                    android:id="@+id/mylist"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:background="@android:color/transparent"
                    android:drawSelectorOnTop="false"
                    android:layout_weight="1" />
                <TextView
                    android:id="@+id/empty_list_message"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:textColor="#FFFFFF"
                    android:layout_gravity="center_vertical|center_horizontal"
                    android:text="@string/msg_for_emptyschd"
                    android:layout_margin="14dip"
                    android:layout_weight="1" />
            </LinearLayout>
            <RelativeLayout
                android:id="@+id/bottom_action_bar"
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:background="@drawable/schedule_bottom_actionbar_border"
                android:layout_marginBottom="2dip"
                android:layout_gravity="bottom"
                android:visibility="gone" >
                <Button
                    android:id="@+id/delete_selecteditems_button"
                    android:text="Deleted Selected"
                    android:layout_width="140dip"
                    android:layout_height="40dip"
                    android:layout_alignParentLeft="true"
                    android:layout_marginLeft="3dip"
                    android:layout_marginTop="3dip" />
                <Button
                    android:id="@+id/cancel_button"
                    android:text="Cancel"
                    android:layout_width="140dip"
                    android:layout_height="40dip"
                    android:layout_alignParentRight="true"
                    android:layout_marginRight="3dip"
                    android:layout_marginTop="3dip" />
            </RelativeLayout>
        </FrameLayout>
    </LinearLayout>

    So far I have got everything working, except that when the bottom bar becomes visible upon checkbox selection, it overlaps the last element of the list. All other list items can be scrolled up, but you can't scroll up the very last item of the list, so the user cannot select that item if he intends to. Here is the screenshot of the overlap. I have tried using the ListView footer option, but that appends the bar to the end of the list instead of keeping it fixed at the bottom of the screen. Is there a way I could "raise" the ListView enough so that the overlap won't happen? BTW, I have already tried adding a bottom margin to the ListView itself, or to the LinearLayout wrapping the ListView, right before making the button bar visible, but it introduces other bugs, like clicking one checkbox checking another checkbox in the ListView.
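
    One approach that avoids margins entirely (a sketch, not tested against this exact layout): keep the bar as an overlay in the FrameLayout, but give the ListView bottom padding equal to the bar's height while the bar is visible, with clipToPadding turned off so rows can still scroll into that space. The activity class name and R.layout.page_layout below are assumptions; the R.id values match the XML above.

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.ListView;

        // Sketch: reserve space for the sliding bar so the last row can scroll above it.
        public class ScheduleActivity extends Activity {
            private ListView list;
            private View bottomBar;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.page_layout); // assumption: the XML above saved as page_layout.xml
                list = (ListView) findViewById(R.id.mylist);
                bottomBar = findViewById(R.id.bottom_action_bar);
            }

            private void showBottomBar() {
                bottomBar.setVisibility(View.VISIBLE);
                bottomBar.post(new Runnable() {          // wait until the bar has been measured
                    public void run() {
                        list.setPadding(list.getPaddingLeft(), list.getPaddingTop(),
                                        list.getPaddingRight(), bottomBar.getHeight());
                        list.setClipToPadding(false);     // rows still scroll through the padded area
                    }
                });
            }

            private void hideBottomBar() {
                bottomBar.setVisibility(View.GONE);
                list.setPadding(list.getPaddingLeft(), list.getPaddingTop(),
                                list.getPaddingRight(), 0);
            }
        }

    As for the checkbox that appears to check itself when a margin is added, that may be a view-recycling symptom in the adapter rather than a layout problem.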

    Read the article

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause.

    - by Greg Potter
    I am trying to compare two tables to find the rows in each table that are not in the other. Table 1 has a groupby column to create 2 sets of data within table 1:

    groupby     number
    ----------- -----------
    1           1
    1           2
    2           1
    2           2
    2           4

    Table 2 has only one column:

    number
    -----------
    1
    3
    4

    So Table 1 has the values 1, 2, 4 in group 2 and Table 2 has the values 1, 3, 4. I expect the following results when joining for group 2:

    Table 1 LEFT OUTER JOIN Table 2
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    2           2           NULL

    Table 2 LEFT OUTER JOIN Table 1
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    NULL        NULL        3

    The only way I can get this to work is if I put the filter in the WHERE clause for the first join:

    PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table1
    LEFT OUTER join table2
    on table1.number = table2.number
    WHERE table1.groupby = 2 AND table2.number IS NULL

    and in the ON clause for the second:

    PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table2
    LEFT OUTER join table1
    on table2.number = table1.number AND table1.groupby = 2
    WHERE table1.number IS NULL

    Can anyone come up with a way of not using the filter in the ON clause but in the WHERE clause? The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to the batch from yesterday stored in a partitioned table, which also has all the previously extracted batches.

    Code to create tables 1 and 2:

    create table table1 (number int, groupby int)
    create table table2 (number int)
    insert into table1 (number, groupby) values (1, 1)
    insert into table1 (number, groupby) values (2, 1)
    insert into table1 (number, groupby) values (1, 2)
    insert into table2 (number) values (1)
    insert into table1 (number, groupby) values (2, 2)
    insert into table2 (number) values (3)
    insert into table1 (number, groupby) values (4, 2)
    insert into table2 (number) values (4)

    EDIT: A bit more context. Depending on where I put the filter I get different results. As stated above, the WHERE clause gives me the correct result in one case and the ON clause in the other; I am looking for a consistent way of doing this.

    Where:
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table1
    LEFT OUTER join table2
    on table1.number = table2.number
    WHERE table1.groupby = 2 AND table2.number IS NULL

    Result:
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    2           2           NULL

    On:
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table1
    LEFT OUTER join table2
    on table1.number = table2.number AND table1.groupby = 2
    WHERE table2.number IS NULL

    Result:
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    1           1           NULL
    2           2           NULL
    1           2           NULL

    Where (table 2 this time):
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table2
    LEFT OUTER join table1
    on table2.number = table1.number AND table1.groupby = 2
    WHERE table1.number IS NULL

    Result:
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    NULL        NULL        3

    On:
    select table1.groupby as [T1_Groupby], table1.number as [T1_Number], table2.number as [T2_Number]
    from table2
    LEFT OUTER join table1
    on table2.number = table1.number
    WHERE table1.number IS NULL AND table1.groupby = 2

    Result:
    T1_Groupby  T1_Number   T2_Number
    ----------- ----------- -----------
    (0 rows returned)

    Read the article

  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp on shell scripting, so my examples will likely be formatted with that mindset (but I would like to create them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that is a Word document converted to text and then swapped from Windows to UNIX format in Notepad++. The file is uniform in that each section of the file has the same fields/formatting/tables. What I plan to do, basically, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on.

    So what I want to do is grab each section by doing something like:

    sed -n '/job_name_1_regex/,/job_name_2_regex/' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have:

    open FORMAT_FILE, 'test_format.txt';

    and then I use foreach $line (<FORMAT_FILE>) to parse the file line by line. Is there a better way?

    My next problem is that I converted from a Word doc with tables, which look like:

    Table Heading 1     Table Heading 2
    Heading 1/Value 1   Heading 2/Value 1
    Heading 1/Value 2   Heading 2/Value 2

    but in the text file they look like:

    Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the heading from the text file. The values of Heading 1 will always be on the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet. Sorry if this is a bit scatterbrained... it's still not fully formed in my head.

    Read the article

  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have produces two different output files depending on whether I run it under Windows PowerShell or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File; I believe that PerlIO is doing some screwy stuff. It seems as if under cmd.exe the chosen encoding is much more compact (4.09 KB), compared to PowerShell, which generates a file nearly twice the size (8.19 KB). This script takes a shell script and generates a Windows batch file. The one generated under cmd.exe appears to be just regular ASCII (1-byte characters), while the other one appears to be UTF-16 (first two bytes FF FE).

    Can someone verify and explain why PerlIO works differently under Windows PowerShell than under cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable; the UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use File::Spec;
    use IO::File;
    use IO::Dir;
    use feature ':5.10';

    my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' );
    my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!;
    my @lines = grep $_ !~ /^#.*/, <$fh>;
    my $file = join '', @lines;
    $file =~ s/ \\\n/ /gm;
    $file =~ tr/'\t/"/d;
    $file =~ s/ +/ /g;
    $file =~ s/\b"|"\b/"/g;
    my @singleLnFile = grep /ncftp|echo/, split $/, $file;
    s/\$PWD\///g for @singleLnFile;
    my $dh = IO::Dir->new( '.' );
    my @files = grep /\.pl$/, $dh->read;
    say 'echo off';
    say "perl $_" for @files;
    say for @singleLnFile;
    1;

    Read the article

  • Would Like Multiple Checkboxes to Update World PNG Image Using Mogrify to FloodFill Countries With C

    - by socrtwo
    Hi. I'm asking in another forum whether the best way to do this is with JavaScript or Ajax, but I'm wondering if there is an even simpler way. I'm trying to create a web service where users can check off which countries they have visited from a list of 175 or so, and a world map image would then instantly update with a filled color. There are other similar services, but I'm envisioning mine to be updated both from checks in checkboxes and by clicking on the target country in the displayed image, say with an imagemap. Additionally, other solutions display all the visited countries in the same color; I would like different colors for different countries, or at least for those countries that touch. Eventually I would like to include a feature that lets users choose which colors to assign to countries.

    I found a SourceForge project called pwmfccd. It's simply an open-source image of the world and the coordinates on the PNG image for all the countries. You can use mogrify from ImageMagick and floodfill to fill the countries with color. I have done this successfully, locally, with batch files. My ISP has told me where mogrify is located, basically "/usr/bin/mogrify".

    I now have a horrendously complicated CGI script which, if it worked, is set to redraw the world map image with each check. It's here. It also redraws the whole web page with each check. The web page starts here. Of course this is not at all efficient, and I think the real way to go is probably Ajax or JavaScript, so that maybe just the image gets changed and redrawn, not the whole web page. Sorry, I don't even know the difference between JavaScript and Ajax and their relative merits at this point.

    I suppose you could make just one part of the image update with each check or click instead of redrawing even the whole image, but I have never heard so much as a hint of being able to do that for irregularly shaped image elements like countries. So I guess an imagemap and sister checkbox entries, tied to mogrify events redrawing the user's personal copy of the image and then refreshing it, would be the only way to go.

    So how do you do this with something other than JavaScript or Ajax, or is that definitely the way to go, and if so, how would you do it? Or can you after all cut up a web-based image into irregular puzzle-shaped pieces which you can redraw individually at will? Thanks in advance for reading and considering answering this post.

    Read the article

  • Question about the code of the backend of symfony

    - by user248959
    Hi, this is the index action and template generated for the backend for the model "coche".

    public function executeIndex(sfWebRequest $request)
    {
      // sorting
      if ($request->getParameter('sort') && $this->isValidSortColumn($request->getParameter('sort')))
      {
        $this->setSort(array($request->getParameter('sort'), $request->getParameter('sort_type')));
      }
      // pager
      if ($request->getParameter('page'))
      {
        $this->setPage($request->getParameter('page'));
      }
      $this->pager = $this->getPager();
      $this->sort = $this->getSort();
    }

    This is the index template:

    <?php use_helper('I18N', 'Date') ?>
    <?php include_partial('coche/assets') ?>
    <div id="sf_admin_container">
      <h1><?php echo __('Coche List', array(), 'messages') ?></h1>
      <?php include_partial('coche/flashes') ?>
      <div id="sf_admin_header">
        <?php include_partial('coche/list_header', array('pager' => $pager)) ?>
      </div>
      <div id="sf_admin_bar">
        <?php include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration)) ?>
      </div>
      <div id="sf_admin_content">
        <form action="<?php echo url_for('coche_coche_collection', array('action' => 'batch')) ?>" method="post">
          <?php include_partial('coche/list', array('pager' => $pager, 'sort' => $sort, 'helper' => $helper)) ?>
          <ul class="sf_admin_actions">
            <?php include_partial('coche/list_batch_actions', array('helper' => $helper)) ?>
            <?php include_partial('coche/list_actions', array('helper' => $helper)) ?>
          </ul>
        </form>
      </div>
      <div id="sf_admin_footer">
        <?php include_partial('coche/list_footer', array('pager' => $pager)) ?>
      </div>
    </div>

    In the template there is this line:

    include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration)) ?>

    but I cannot find the variables $this->filters and $this->configuration in the index action. How is that possible? Javi

    Read the article

  • How to integrate Purify into Hudson CI?

    - by Martin
    Hello everybody! I have a Hudson CI system set up, and for the moment it is used for building a project and running some unit tests. My next step is to integrate the memory leak detector Purify into the build cycle. I want to run the unit tests inside Purify as well, and for this I have created a new batch task which runs the following command:

    purify.exe /SaveTextData MyExecutable.exe --test TestLibrary.dll --output xml

    As I read in the Purify documentation, the /SaveTextData option is used to run Purify in non-GUI mode. If I run this command on my local workstation on the command line it works perfectly, but when it is started by Hudson, nothing happens. Unfortunately there are no logs from Purify... Has anyone ever tried to start Purify from Hudson or any other CI system? Thanks in advance. Best regards, Martin

    EDIT: I forgot to mention that I have Hudson running as master and slave on different computers. On the master I have configured a task which should start the unit tests within Purify on the slave. I am running the slave via JNLP.

    EDIT 18.03.2010: OK, so finally I am a bit closer to the source of the problem. I have discovered that when I run my unit tests in Purify locally, the log file EngineCmdLine.log contains three entries. I am starting Purify with the following command:

    purify.exe /SaveTextData TestRunnerConsoleWD.exe --test TestDemoWD.dll

    Output of EngineCmdLine.log when starting Purify manually:

    File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestRunnerConsoleWD.exe
    File: C:\WINDOWS\system32\ws2_32.dll
    File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll

    Output when starting via Hudson:

    File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestRunnerConsoleWD.exe
    File: C:\WINDOWS\system32\ws2_32.dll
    File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll
    File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll

    The error output of Purify:

    Instrumenting: BtcTestDemoWD.dll 313856 bytes
    Purify: While processing file D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TESTFWWD.DLL:
    Error: Cannot replace file c:\Programme\IBM\RationalPurifyPlus\PurifyPlus\cache\BTCTESTFWWD$Purify_D_workspace_hudson_workspace_Purify_TestFW_CommonsCoreTest_Cpp_msvs9.DLL. Is it in use?
    TESTFWWD.DLL 505344 bytes
    Unable to instrument D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll (0x1)

    The question is: why does Purify list the TestDemoWD.dll library twice when started via Hudson?

    Read the article

  • Data aggregation mongodb vs mysql

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data-aggregation requirements. The main project requirements are the following:

    - Store millions of records for each user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.
    - Data aggregation on those entries must be performed on the fly. The users need to be able to filter the entries by a ton of available filters and then see summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge.
    - Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated over all the data.
    - The data is going to arrive in batches most of the time, e.g. the user will upload the data every day and it could be around 3,000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example.

    I made a simple test of creating a table with 1 million rows and performing a simple sum of one column, both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 s. I have also made the test with CouchDB and had much worse results.

    What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible?

    From my test (maybe I have done something wrong), with the current performance it seems impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might be of help for the implementation of the project? Thanks, Dimitris
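
    On the "is that possible?" part for MongoDB: sums and averages can be pushed to the server instead of being computed document by document in the client, which is where naive tests usually lose the most time. The sketch below uses the aggregation pipeline of the current MongoDB Java driver (which did not exist at the time of the original comparison); the database, collection and field names (stats, entries, userId, category, amount, ts) are assumptions for illustration, not a benchmark claim.

        import java.util.Arrays;
        import org.bson.Document;
        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import static com.mongodb.client.model.Accumulators.avg;
        import static com.mongodb.client.model.Accumulators.sum;
        import static com.mongodb.client.model.Aggregates.group;
        import static com.mongodb.client.model.Aggregates.match;
        import static com.mongodb.client.model.Filters.and;
        import static com.mongodb.client.model.Filters.eq;
        import static com.mongodb.client.model.Filters.gte;

        public class EntryAggregation {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost")) {
                    MongoCollection<Document> entries =
                        client.getDatabase("stats").getCollection("entries");
                    // Server-side totals and averages per category for one user's filtered entries.
                    for (Document d : entries.aggregate(Arrays.asList(
                            match(and(eq("userId", 42), gte("ts", 20090101))),   // the user's filter choices
                            group("$category", sum("total", "$amount"),
                                               avg("average", "$amount"))))) {
                        System.out.println(d.toJson());
                    }
                }
            }
        }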

    Read the article

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop is blocking for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

    NSString* statement;
    statement = @"BEGIN EXCLUSIVE TRANSACTION";
    sqlite3_stmt *beginStatement;
    if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
        printf("db error: %s\n", sqlite3_errmsg(database));
        return;
    }
    if (sqlite3_step(beginStatement) != SQLITE_DONE) {
        sqlite3_finalize(beginStatement);
        printf("db error: %s\n", sqlite3_errmsg(database));
        return;
    }

    NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

    statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
    sqlite3_stmt *compiledStatement;
    if(sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
        for(int i = 0; i < [items count]; i++){
            NSMutableDictionary* item = [items objectAtIndex:i];
            NSString* tag = [item objectForKey:@"id"];
            NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
            NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
            NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

            sqlite3_bind_int( compiledStatement, 1, hash);
            sqlite3_bind_text( compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_bind_text( compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_bind_int( compiledStatement, 4, timestamp);
            sqlite3_bind_blob( compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

            while(YES){
                NSInteger result = sqlite3_step(compiledStatement);
                if(result == SQLITE_DONE){
                    break;
                }
                else if(result != SQLITE_BUSY){
                    printf("db error: %s\n", sqlite3_errmsg(database));
                    break;
                }
            }
            sqlite3_reset(compiledStatement);
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);

    Read the article

< Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >