Search Results

Search found 17595 results on 704 pages for 'log'.


  • No sound after clean install 11.10

    - by Jorge
    First of all, sorry to ask this; I'm sure it has been asked many times before. Second, sorry for my English, it's not my native language. And third, thank you in advance. I hope the following info will help. Here's a log: http://www.alsa-project.org/db/?f=07089caf530494bc4bc23e1d1cd56b3a5fae03c6 I already checked 'System - Preferences - Sound'. Here's a screenshot: http://i.imgur.com/Ghwnj.png

        jorge@jorge-desktop:~$ sudo lshw -class multimedia
        *-multimedia
             description: Multimedia audio controller
             product: VT8233/A/8235/8237 AC97 Audio Controller
             vendor: VIA Technologies, Inc.
             physical id: 11.5
             bus info: pci@0000:00:11.5
             version: 60
             width: 32 bits
             clock: 33MHz
             capabilities: pm cap_list
             configuration: driver=VIA 82xx Audio latency=0
             resources: irq:22 ioport:e400(size=256)

    Tried with no results:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get clean && sudo apt-get autoremove
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo apt-get install ubuntu-desktop

    Also edited /etc/default/grub (sudo gedit /etc/default/grub), changing:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    to:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"

    then ran sudo update-grub and rebooted... without any result. EDIT: I made sure that everything is fine with aplay -l, lspci -v, and lsmod, and checked alsamixer; nothing is muted. Well, I'm running out of ideas. Thanks.

    Read the article

  • Oracle's Integrated Systems Management and Support Experience

    - by Scott McNeil
    With its recent launch, Oracle Enterprise Manager 11g introduced a new approach to integrated systems management and support. What this means is taking both areas of IT management and vendor support and combining them into one comprehensive, centralized platform.

    Traditional Ways

    Under the traditional method, IT operations teams would often focus on running their systems using management tools that weren't connected to their vendor's support systems. If you needed support with a product, administrators would contact the vendor by phone or visit the vendor website, then log a service request in order to fix the issue. This method was also very time consuming, as administrators had to collect their software configurations, operating system and hardware settings, then manually enter them into an online form or recite them to a support analyst on the phone. The vendor, in turn, had to analyze all the configuration data to recreate the problem in order to solve it. This approach was manual, uncoordinated, and error-prone, and duplication of effort between the customer and vendor was frequent.

    A Better Support Experience

    By removing the boundaries between support, IT management tools, and the customer's IT infrastructure, Oracle paved the way for a better support experience. This was achieved through integration between Oracle Enterprise Manager 11g and My Oracle Support. Administrators can not only manage their IT infrastructure and applications through Oracle Enterprise Manager's centralized console, but can also receive proactive alerts and patch recommendations right within the console they use day in and day out. Having one single source of information saves time and potentially prevents unforeseen problems down the road.

    All for One, and One for All

    The first step is to allow Oracle Enterprise Manager to upload configuration data into Oracle's secure configuration repository, where it can be analyzed for potential issues or conflicts across all customers. A fix to a problem encountered by one customer may be relevant to many more, and the integration between My Oracle Support and Oracle Enterprise Manager allows all customers who may be impacted by the problem to receive a notification about the fix. Once the alert appears in Oracle Enterprise Manager's console, the administrator can investigate further using automated workflows provided in Oracle Enterprise Manager to analyze potential conflicts. Finally, administrators can schedule a time to test and automatically apply the fix to all the systems that need it. In the end, this helps customers maintain their service levels without compromise and avoid unplanned downtime that may result from potential issues or conflicts.

    This new paradigm of integrated systems management and support helps customers keep their systems secure, compliant, and up to date, while eliminating the traditional silos between IT management and vendor support. Oracle's next-generation platform also works hand in hand to provide a higher quality of service to business users while at the same time making life for administrators less complicated. For more information on Oracle's integrated systems management and support experience, be sure to visit our Oracle Enterprise Manager 11g Resource Center for the latest customer videos, webcasts, and white papers.

    Read the article

  • PPTP VPN connects via NM but goes down during SSH connection

    - by Andrea Olivato
    I set up a PPTP VPN connection via Network Manager and it connects correctly (I see the lock near the notification icon and the message "VPN connection has been successfully..."). As soon as I try to perform any SSH connection via the established tunnel, the connection itself goes down with the message "VPN connection failed". The SSH connection always fails at:

        debug1: SSH2_MSG_KEXINIT sent

    I've looked into the system logs, and this is the log:

        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> Starting VPN service 'pptp'...
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 7093
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' appeared; activating connections
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: init (1)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: starting (3)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (Connect) reply received.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP4 Config Get) reply received from old-style plugin.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN Gateway: 5.98.141.210
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP Config Get) complete.
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: started (4)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopping (5)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopped (6)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state change reason: 0
        Dec 12 12:25:15 ushuaia NetworkManager[1155]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Dec 12 12:25:20 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' disappeared

    Please note that the same VPN is configured on my colleagues' Windows 7 machines and works without problems when they use PuTTY to connect via SSH.

    Read the article

  • Data Structures usage and motivational aspects

    - by Aubergine
    Throughout my long student life I always wondered why there are so many data structures, yet there seems to be so little use of most of them. That opinion didn't really change when I got a job. We have brilliant books on what they are and their complexities, but I never encounter resources that give a good hint of practical usage. I perfectly understand that I have to look at the problem, analyse the required operations, and look for a data structure that does them efficiently. However, in practice I never do that, not out of laziness, but because when it comes to work I prioritize delivery time over self-development. Over time I thought that as I became a better developer I would automatically use more of them - that didn't happen at all, or maybe I just didn't notice. Then I found that my colleagues are usually in the same boat as me: knowing more or less three data structures, being totally happy about it, and refusing to discuss the matter further with me, coming back to conversations about 'cool new languages', 'libraries that do the job for you', and the joy of working under scrumban, etc. I am stuck with ArrayLists, Arrays, and SortedMap, which no matter what I do always suffice, or else I tweak them to be capable of fulfilling my task.

    Yes, it might be inefficient, but do we really have to care, if Intel keeps increasing performance over the years whether or not we improve our skills? Does a new Xeon or IBM machine really care what we use? What if I like to build things, but I am not particularly excited whether it is n log(n) or just n? Over twenty years processing power has increased enormously - does that give us the freedom not to be critical about which structure to use? On top of that, new, more optimized languages appear which support multiple cores more efficiently.

    To be more specific: I would like to find motivational material on complex real areas/cases where data structures can be used effectively, and I would be really grateful if you could provide relevant resources. There is a similar question, but in the end the links again mostly describe the structures or give toy examples (vehicles, students, or a holy grail quest - yes, very relevant), and people keep referring to "the scenario decides the data structure to use". I want to know these complex scenarios so I can identify similarities to my own scenario and then apply them - the complex scenarios where it really matters, and not necessarily of a quantitative nature. Is efficiency really a data structure's only concern? There seems to be no particular convenience for the developer in using one over another. (Only when I found scientific resources on why exactly simple carbohydrates are harmful did I stop eating sugar and candy completely, replacing them with less harmful fruit - I hope you can see the analogy.)
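
    For what it's worth, the closest I have come to a motivating example on my own is a toy benchmark like the following Python sketch (the numbers are machine-dependent and purely illustrative; the same trade-off applies to ArrayList versus HashSet in Java). It times membership tests on a list, which scans linearly in O(n), against a set, which hashes in O(1) on average:

        import random
        import time

        N = 1_000_000
        values = list(range(N))
        members_list = values         # list membership is a linear scan, O(n)
        members_set = set(values)     # set membership is hashed, O(1) average

        probes = [random.randrange(2 * N) for _ in range(1000)]  # ~half will miss

        start = time.perf_counter()
        hits_list = sum(1 for p in probes if p in members_list)
        list_secs = time.perf_counter() - start

        start = time.perf_counter()
        hits_set = sum(1 for p in probes if p in members_set)
        set_secs = time.perf_counter() - start

        assert hits_list == hits_set  # same answers, very different cost
        print(f"list: {list_secs:.3f}s   set: {set_secs:.6f}s   for {len(probes)} lookups")

    Both structures answer the same question, but the gap grows with N, so no hardware upgrade permanently closes it - which is, I suppose, the kind of "scenario" answer I keep being given.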

    Read the article

  • Oracle CRM For Public Sector, Commercial Business, Education

    - by michael.seback
    Chongqing Transport Commission Improves Management of Transport Projects

    The Chongqing Transport Commission is responsible for public passenger, road, and waterway transport in urban and rural areas of Chongqing. The commission administers the region's road and water industry; oversees the construction of transport infrastructure; and manages civil aviation, railroads, roads, waterways, ports, and wharves. "After studying the IT initiatives of other provincial transport commissions, we decided to use Siebel Public Sector to build our integrated transport service system. The Siebel software offers powerful functions that allow us to integrate information and improve the management of our road, rail, and waterway infrastructure projects." - Chen Xiaoming, Vice Director, Information Center, Chongqing Transport Commission. Read more here.

    Siemens Information Services Increases Productivity by 20%

    Siemens Information Services Pvt. Ltd. provides back-office account processing services to Siemens' vendors. The company works with Siemens' healthcare, energy, and industry divisions in Europe, the United States, and parts of the Asia-Pacific region. It provides financial services such as processing payroll, accounts data, purchase orders, invoices, and payments, and also creates service catalogs for customers and internal teams. "Oracle CRM On Demand provides us with a complete view of each customer's data from the moment they log a request to the time we close it. This has eliminated manual requests and improved the service we offer to our clients across the Asia-Pacific region." - Sunil Zutshi, General Manager, IT, Siemens Information Services Pvt. Ltd. Read more here.

    China Distance Education Holdings Improves Call Center Productivity by 24%

    China Distance Education Holdings Limited is a leading provider of online education. The organization offers 174 courses through 16 Web sites, including accounting, healthcare, law, and engineering. In 2010, 215,000 students were enrolled. "Online education is a fast-growing sector in China. To maintain our competitiveness, we implemented Oracle Contact Center Anywhere to make it easier and faster for our call center staff to respond to student enquiries. As a result, their productivity increased by 24%." - Qin Songjiang, Chief Technology Officer, China Distance Education Holdings Limited. Read more here.

    Read the article

  • Bluetooth firmware problem in Ubuntu 13.04

    - by chanzerre
    I have a Dell Inspiron 15R 5520 laptop. Bluetooth is not working at all. rfkill list all gives:

        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: brcmwl-0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    dmesg | grep -i bluetooth gives:

        [   13.644428] Bluetooth: Core ver 2.16
        [   13.644445] Bluetooth: HCI device and connection manager initialized
        [   13.644453] Bluetooth: HCI socket layer initialized
        [   13.644455] Bluetooth: L2CAP socket layer initialized
        [   13.644461] Bluetooth: SCO socket layer initialized
        [   15.861363] Bluetooth: hci0 command 0x1003 tx timeout
        [   15.903443] Bluetooth: can't load firmware, may not work correctly
        [   17.332535] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [   17.332538] Bluetooth: BNEP filters: protocol multicast
        [   17.332544] Bluetooth: BNEP socket layer initialized
        [   17.393768] Bluetooth: RFCOMM TTY layer initialized
        [   17.393781] Bluetooth: RFCOMM socket layer initialized
        [   17.393783] Bluetooth: RFCOMM ver 1.11

    hciconfig gives:

        hci0:   Type: BR/EDR  Bus: USB
                BD Address: E0:06:E6:D5:DB:46  ACL MTU: 1021:8  SCO MTU: 64:1
                UP RUNNING PSCAN ISCAN
                RX bytes:687 acl:0 sco:0 events:56 errors:0
                TX bytes:2024 acl:0 sco:0 commands:52 errors:0

    I have visited http://wireless.kernel.org/en/users/Drivers/b43 and, following its instructions, lspci -vnn -d 14e4: gives:

        08:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
                Subsystem: Dell Wireless 1704 802.11n + BT 4.0 [1028:0016]
                Flags: bus master, fast devsel, latency 0, IRQ 17
                Memory at c1500000 (64-bit, non-prefetchable) [size=32K]
                Capabilities: <access denied>
                Kernel driver in use: wl

    So I got my PCI ID as 14e4:4365, which the page says is not supported; the alternative is wl. What should I do? My Wi-Fi is working normally without any problems, but Bluetooth is not working. sudo dpkg -i wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb gives the following error:

        (Reading database ... 208543 files and directories currently installed.)
        Unpacking wireless-bcm43142-dkms (from wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb) ...
        Setting up wireless-bcm43142-dkms (6.20.55.19-1) ...
        Loading new wireless-bcm43142-6.20.55.19 DKMS files...
        Building only for 3.8.0-23-generic
        Building initial module for 3.8.0-23-generic
        Traceback (most recent call last):
          File "/usr/share/apport/package-hooks/dkms_packages.py", line 22, in <module>
            import apport
        ImportError: No module named apport
        Error! Bad return status for module build on kernel: 3.8.0-23-generic (x86_64)
        Consult /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log for more information.

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchovers and failovers of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now there was no external interface to access this event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format: a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is as shown below:

        # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered when scanning the historical logs. The command returns a list of events in the following format, with fields separated by semicolons (;):

        resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: the node names from which the resource group failed over or was switched manually. target_nodes: the node names to which the resource group failed over or was switched manually. There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag. In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group), and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

        # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
        oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
        asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
        asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
        geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
        asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
        asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
        oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
        oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some entries may have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or came online during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya
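
    Because the output is designed to be program-parsable, turning it into structured records takes only a few lines of scripting. The following is a minimal Python sketch (an illustration only, not part of the product; it assumes the scha_cluster_get output shown above has already been captured as text):

        from datetime import datetime

        # Sample lines as produced by: scha_cluster_get -O RG_FAILOVER_LOG 3
        raw = """\
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014"""

        events = []
        for line in raw.splitlines():
            rg, source, target, stamp = line.strip().split(";")
            events.append({
                "resource_group": rg,
                # Empty source: the group was brought online manually / at boot.
                "source_nodes": source.split(",") if source else [],
                # Empty target: the group went offline.
                "target_nodes": target.split(",") if target else [],
                "time": datetime.strptime(stamp, "%a %b %d %H:%M:%S %Y"),
            })

        for e in events:
            print(e["time"], e["resource_group"], e["source_nodes"], "->", e["target_nodes"])

    Each record then carries the same semantics described above: an empty source_nodes list marks a manual or boot-time online event, and an empty target_nodes list marks the group going offline.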

    Read the article

  • BizTalk Send Ports, Delivery Notification and ACK / NACK messages

    - by Robert Kokuti
    Recently I worked on an orchestration which sent messages out to a Send Port on a 'fire and forget' basis. The idea was that once the orchestration passed the message to the Messagebox, it was left to BizTalk to manage the sending process. Should the send operation fail, the Send Port got suspended and the orchestration completed asynchronously, regardless of the Send Port's success or failure. However, we still wanted to log the sending success, using the ACK/NACK messages. On normal ports, BizTalk generates ACK/NACK messages back to the Messagebox if the logical port's Delivery Notification property is set to 'Transmitted'. Unfortunately, this setting also causes the orchestration to wait for the send port's result, and should the Send Port fail, the orchestration will also receive a 'DeliveryFailureException' exception. So we may end up with a suspended port and a suspended orchestration - not the outcome wanted here; there was no value in suspending the orchestration in our case. There are a couple of ways to fix this:

    1. Catch the DeliveryFailureException (full type name Microsoft.XLANGs.BaseTypes.DeliveryFailureException) and do nothing in the orchestration's exception block. Although this works, it still slows down the orchestration, which has to wait for the outcome of the send port operation.

    2. Use a Direct Port instead, and set the ACK request on the message context prior to passing it to the port:

        msgToSend(BTS.AckRequired) = true;

    This has to be done in an Expression shape, as a Direct logical port does not have a Delivery Notification property - make sure to add a reference to Microsoft.BizTalk.GlobalPropertySchemas. Setting this context value in the message will cause the messaging agent to create an appropriate ACK or NACK message after the port execution. The ACK/NACK messages can be caught and logged by dedicated Send Ports, filtering on the BTS.AckType value (which is either ACK or NACK). ACK/NACK messages are treated in a special way by BizTalk, and a useful feature is that the original message's context values are copied to the ACK/NACK message context - these can be used for logging the right information. Other useful context properties of the ACK/NACK messages:

    - BTS.AckSendPortName can be used to identify the original send port.
    - BTS.AckOwnerID (aka http://schemas.microsoft.com/BizTalk/2003/system-properties.AckOwnerID) holds the instance ID of the failed Send Port and can be used to resubmit or terminate the instance.

    Someone may ask: can we just turn off Delivery Notification on a 'normal' port and set the AckRequired property on the message, as for a Direct port? Unfortunately, this does not work - BizTalk seems to remove this property automatically if the message goes through a port where Delivery Notification is set to None.

    Read the article

  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive for space. On log-in it says:

        System load:    2.24               Processes:           179
        Usage of /:     88.7% of 912.89GB  Users logged in:     0
        Memory usage:   6%                 IP address for p5p1: 192.168.0.100
        Swap usage:     0%

        => / is using 88.7% of 912.89GB

    lvdisplay outputs:

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/root
        LV Name                root
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 1
        LV Size                2.70 TiB
        Current LE             707789
        Segments               2
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/swap_1
        LV Name                swap_1
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 2
        LV Size                3.75 GiB
        Current LE             959
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1

    vgdisplay outputs:

        VG Name               DeathStar-vg
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               2.73 TiB
        PE Size               4.00 MiB
        Total PE              715335
        Alloc PE / Size       708748 / 2.70 TiB
        Free  PE / Size       6587 / 25.73 GiB

    df outputs:

        Filesystem                     1K-blocks      Used Available Use% Mounted on
        /dev/mapper/DeathStar--vg-root 957238932 848972636  59634696  94% /
        none                                   4         0         4   0% /sys/fs/cgroup
        udev                             1864716         4   1864712   1% /dev
        tmpfs                             374968      1060    373908   1% /run
        none                                5120         4      5116   1% /run/lock
        none                             1874824       148   1874676   1% /run/shm
        none                              102400        24    102376   1% /run/user
        /dev/sda2                         234153     56477    165184  26% /boot

    And fdisk /dev/sda -l outputs:

        Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1  4294967295  2147483647+  ee  GPT
        Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73 TiB. Thanks in advance for any help.

    EDIT -- Yes, I did make changes to the LVM config, but it didn't do anything. As requested, the output of parted -l:

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2097kB  1049kB                     bios_grub
         2      2097kB  258MB   256MB   ext2
         3      258MB   3001GB  3000GB                     lvm

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sdb: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start  End  Size  Type  File system  Flags

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  4022MB  4022MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-root: 2969GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system  Flags
         1      0.00B  2969GB  2969GB  ext4

    Read the article

  • Oracle GoldenGate 11gR2 New Feature: Integrated Capture

    - by Doug Reid
    With the release of Oracle GoldenGate 11gR2, the Product Management team is very excited about the addition of Integrated Capture for the Oracle platform. Integrated Capture is unique in the industry and unique to the Oracle database; it is not available on any other database platform. This new feature moves GoldenGate's capture capabilities closer to the Oracle Database engine and is the foundation for Oracle GoldenGate on the Oracle Database platform over the long term. It is important to note that Integrated Capture does not replace our classic Capture process; both are available on the Oracle Database platform.

    The Integrated Capture mechanism relies on Oracle's internal log parsing and processing to capture DML transactions. By moving closer to the Oracle Database engine, Oracle GoldenGate can take advantage of new Oracle Database features and functionality more quickly. For example, this new mechanism allows GoldenGate to support advanced features such as compression. Integrated Capture provides support for all flavors of Oracle compression, including hybrid columnar compression (EHCC) on Exadata, whereas our classic capture does not.

    Integrated Capture supports two different deployment configurations: on-source and downstream. The on-source deployment model is what most customers are familiar with: Oracle GoldenGate executes on the database server, capturing changes in real time. This is the default deployment method. The other option is downstream, where the source database and the Oracle GoldenGate Capture process are on different machines; this method effectively off-loads the processing requirements to a second machine. Customers may choose whichever option they prefer based on their requirements.

    Additional information on Integrated Capture can be found in our documentation and the white paper "Oracle GoldenGate for Oracle".

    Read the article

  • Tomcat Manager Application and HTTP 404 Error

    - by David
    I am trying to set up the admin application for a Tomcat 6.0.24 instance. None of the searches I've done turn up anything I can use. I am using this configuration for Apache 2.2.14:

        Alias /manager /usr/share/tomcat6-admin/manager
        <Directory "/usr/share/tomcat6-admin/manager">
            Options Indexes FollowSymLinks
            AllowOverride None
            allow from all
        </Directory>
        ProxyPass /manager ajp://localhost:8009/manager

    In tomcat-users.xml I have this:

        <tomcat-users>
            <role rolename="tomcat"/>
            <role rolename="admin"/>
            <role rolename="operator"/>
            <role rolename="manager"/>
            <user username="admin" password="nopasswordforyou" roles="admin,tomcat,manager"/>
            <user username="operator" password="nevermind" roles="operator"/>
        </tomcat-users>

    I found docs suggesting I needed the manager-gui role installed and defined, but that appears to be Tomcat 7, not Tomcat 6. The manager.xml is the default provided with Ubuntu Lucid Lynx 10.04:

        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager"
                 antiResourceLocking="false" privileged="true" />

    If I access /manager from a web browser, I get a 404 error from Tomcat: "requested resource not available." If I access /manager/images I get the same thing. If I access /manager/401.jsp I get the actual page. In addition, server.xml has not only the usual Realm (UserDatabaseRealm) but also one for MySQL authentication (JDBCRealm). Investigating this showed that the manager role was not present there for the admin user; I fixed that by doing:

        INSERT USER_ROLE_DB SET USER_NAME='admin', ROLE_NAME='manager';

    I restarted Tomcat, although I suspect that was overkill. No change. I don't see any errors in catalina.out or in the localhost.* log files. What am I missing? What is the interaction between the different realms? How do I get the manager application working?

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:

    - Use an OUD internal capability called the Replication Gateway
    - Use our synchronization tool, Directory Integration Platform, part of Oracle Directory Services Plus
    - Manual export and import

    Let's check the pros and cons of each method.

    The Replication Gateway is the natural, out-of-the-box solution for the task. We created this as a feature of OUD because it works at our replication protocol level: the gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories, fully leveraging the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modification of existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation procedure; it is managed like other OUD components, with monitoring and configuration via the standard console. Note, however, that the OUD Replication Gateway does not perform adaptation (remapping or transformation) between ODSEE and OUD.

    Using Directory Integration Platform as a component external to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay for using DIP to perform the synchronization task: you will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk usage, CPU usage, and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will also not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement.

    Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. This solution can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used if data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • `make install` fails apparently due to typo, but not in makefile: How to find and fix?

    - by Archelon
    I'm trying to install the fujitsu-usb-touchscreen drivers from here, on Kubuntu 12.04 on my new Fujitsu LifeBook P1630. (See fujitsu-usb-touchscreen on kubuntu 13.04 (64-bit) on P1630: `make` errors.) I downloaded the .zip file, unzipped it, and ran make in the directory thus created; this all worked as expected. However, when I run sudo checkinstall (which invokes make install), things go less well. On the first attempt the installation aborted with the following error:

        make: execvp: /etc/init.d/fujitsu_touchscreen: Permission denied
        make: *** [install] Error 127

    I eventually resolved this by:

        $ sudo chmod +x /etc/init.d/fujitsu_touchscreen

    But although a second sudo checkinstall then does not give the execvp error, it still fails at a later stage, and the log (on stdout) shows this dpkg error:

        dpkg: error processing /home/archelon/fujitsu-touchscreen-driver/cybergene-fujitsu-usb-touchscreen-112fdb75b406/cybergene-fujitsu-usb-touchscreen-112fdb75b406_amd64.deb (--install):
         unable to create `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy.dpkg-new' (while processing `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy'): No such file or directory

    And, indeed, there is no /sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy; there is, however, /sys/module/fujitsu_usb_touchscreen/parameters/touch_maxy, and this is presumably what was intended. But this incorrect filename does not appear in the makefile or any other file in the directory, at least not that I can find. Nor does it appear, as I discovered after running sudo checkinstall --install=no as suggested below, in the .deb package created by checkinstall. Where might such a typographical error be originating, and how would I go about fixing it?

    Edited to add: I'm viewing the contents of the .deb file with Ark, Kubuntu's default tool. It contains only three files: control.tar.gz, data.tar.gz, and debian-binary. data.tar.gz contains a directory tree that appears to match the usual root filesystem, with /etc, /lib, /sys, and /usr directories. (Looking at other .deb files on my system, this structure appears to be typical.) A second look shows that control.tar.gz contains three files, one of which is empty. [Screenshots omitted.] Here's the actual .deb file: https://www.dropbox.com/s/odwxxez0fhyvg7a/cybergene-fujitsu-usb-touchscreen_112fdb75b406-1_amd64.deb

    Edited 2013-09-28 to add: After reinstalling Kubuntu 12.04 again, this time recreating the /home partition (which, again, had been generated during an install of 13.04), I can no longer reproduce this error. I am still curious to know how the underscores got changed to slashes, but it looks as though nobody has any idea. It is perhaps also of interest to note that while I have still not successfully run checkinstall against this package, I have done make install; it requires making /etc/init.d/fujitsu_touchscreen executable and installing hal, the GUI freezes shortly after installation completes, there is no particular new functionality afterwards that I have noticed, and the system can no longer resume from being suspended; however, this will be pursued elsewhere.

    Read the article

  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I used collision physics (i.e., Box2D and the Physics Body Editor) and implemented it in the Java code. I'm trying to vary the fall speed following these examples:

    - It falls slower if it's a light object (e.g., a feather).
    - It falls faster depending on the object (e.g., a pebble, rock, or car).

    I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Check my code: this is in the input processor's touchUp() method, in the same class that implements both InputProcessor and Screen:

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            // TODO Touch Up Event
            if (is_Next_Fruit_Touched) {
                BodyEditorLoader Fruit_Loader = new BodyEditorLoader(
                        Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
                Fruit_BD.type = BodyType.DynamicBody;
                Fruit_BD.position.set(x, y);
                FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
                Fruit_FD.density = 1.0f;
                Fruit_FD.friction = 0.7f;
                Fruit_FD.restitution = 0.2f;
                MassData mass = new MassData();
                mass.mass = 5f;
                Fruit_Body[n] = world.createBody(Fruit_BD);
                Fruit_Body[n].setActive(true); // --> Let your dragon fall.
                Fruit_Body[n].setMassData(mass);
                Fruit_Body[n].setGravityScale(1.0f);
                System.out.println("Eggs... " + n);
                Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
                Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
                is_Next_Fruit_Touched = false;
                up = y;
                Gdx.app.log("Initial Y-coordinate", "Y at " + up);
                // Once it's touched, the next fruit will be set to drag.
                if (n < 50) {
                    n++;
                } else {
                    System.exit(0);
                }
            }
            return true;
        }

    And take note: in the show() method, the view size from the camera is 720x1280:

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 1280;
        camera_1.viewportWidth = 720;
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    I thought adding weight would make the falling object fall faster once I release my finger in touchUp(), after picking the object from the upper right of the screen, but the speed remains either constant or slow. How can I solve this? Can you help?

    Read the article

  • Vostro 3560 Wireless on Ubuntu 12.10

    - by ngille
    I have been using Ubuntu 12.04.1 LTS on a Dell Vostro 3560 for some time now. At first I had issues with the wireless, but I was pointed to a driver from Ubuntu 11.xx for the Broadcom BCM43142, and that did work! However, with the upgrade to 12.10 I find myself at a loss, because the driver will not work anymore. When trying to reinstall it I get "Driver of bad quality". I can still install it, but it just won't work. I can't seem to find anything on this issue for 12.10, and I didn't know if anyone had any advice, or if someone could point me in the right direction on resources to write my own driver for it - I know C/C++ but have never written a driver for a Linux machine before. Thanks for any help!

    To be more clear: I installed Ubuntu 12.10 Desktop on my Dell Vostro 3560 laptop. When I log in, my wireless card isn't visible in the Network Manager popup menu, although the wired network shows up there. I installed the driver mentioned below and that did help me on 12.04, but it is now broken due to "Poor quality". A sudo lspci -nn command brings up:

        00:00.0 Host bridge [0600]: Intel Corporation 3rd Gen Core processor DRAM Controller [8086:0154] (rev 09)
        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
        00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
        00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
        00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
        00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
        00:1c.0 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 [8086:1e10] (rev c4)
        00:1c.1 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 [8086:1e12] (rev c4)
        00:1c.2 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 3 [8086:1e14] (rev c4)
        00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
        00:1f.0 ISA bridge [0601]: Intel Corporation HM77 Express Chipset LPC Controller [8086:1e57] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
        00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller [8086:1e22] (rev 04)
        01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 07)
        02:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)   <-- the wireless card

    Read the article

  • 12.04 boots fine, with graphical splash screen, but then Monitor "out of range"

    - by Jim Bednar
    I see dozens of posts from people whose monitors say "out of range" under Ubuntu; it seems there are some serious problems in Ubuntu with autodetection of monitor capabilities. :-( But none of the many, many suggestions I found have solved my problem, and right now I can't use anything graphical on this machine!

    History: I installed Ubuntu 12.04 on my HP ProLiant MicroServer N40L, which worked reasonably at the default resolution across several reboots. At some point I noticed that the proprietary video driver was not in use, and tried to install one to get better window-drawing speeds, but it failed with some sort of error, and I gave up on that. A few weeks later, when I next rebooted, it showed the usual BIOS screen and various boot loading screens (including GRUB), and then the usual purple Ubuntu splash screen with the dots showing that things were loading; but when it finished booting, the monitor went black and eventually showed "Out of range" (with no other information). Given that there were several weeks between reboots (it's a server, after all), I've no idea whether it was a system update, my attempt to install the proprietary drivers, or something else that caused the problem.

    Anyway, the system has booted fine, as I can do Ctrl-Alt-F1 to get a text prompt and can log in there. But Ctrl-Alt-F7 goes back to the out-of-range error. Some posters said to try Ctrl-Alt-minus to cycle through resolutions until one works, but that didn't have any visible effect. Many others said it was a GRUB problem, which seems unlikely given that GRUB's screen looks fine, but I tried editing /etc/default/grub to set a particular resolution (trying many of them) and running update-grub, with no apparent effect. Rebooting into failsafe mode works the same as regular mode. Replacing xorg.conf with xorg.conf.failsafe works the same too.

    I'm at my wits' end! Isn't there anything I can do to convince Ubuntu to choose a mode that the monitor supports - e.g., the one it is using for the splash screen? I don't need great resolution on this machine, just anything that works! Help, please!

    Read the article

  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management and read the book "The Seven Habits of Highly Effective People" by Stephen Covey. Both have been instrumental in helping me get my priorities aligned and keeping me focused. The reason I bring this up is that it gave me a great idea for a small application around which to build a great technical-stack solution that would be easy to demo and explain. Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills I have acquired into a technical implementation. The idea is rather simple, but leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class. The basics are as such: 1) a plan is a must-have, and 2) write it down! A plan not written down is just an idea. How many times have you had an idea that didn't materialize? Exactly. Hence why I am writing it all down now!

    The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits. The worksheet looks like the following (see the data-model sketch after the table):

        Status   | Issue | Area | CQ   | Notes
        ---------+-------+------+------+------
        P  F  L  |       |      | 1234 |
        P  F  L  |       |      | 1234 |
        P  F  L  |       |      | 1234 |
        (rows repeat down the page)

    The idea is really simple and straightforward: you write down all your tasks and keep track of them along the way. The status stands for (P)ending, (F)inished, or (L)ater. You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue belongs. The notes section is for things that happen while you are working through the issue. And last, but not least, is the Area column, which I added as a way to identify the role or area of your life that the task falls within, based on Covey's teachings.

    The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day. All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles. Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations. Stay tuned, as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks
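
    To give a flavor of where this is headed, here is a rough Python sketch of how the worksheet and phone log might map to a data model (all names here are hypothetical placeholders for illustration, not the final design):

        from dataclasses import dataclass, field
        from datetime import datetime
        from enum import Enum

        class Status(Enum):
            PENDING = "P"
            FINISHED = "F"
            LATER = "L"

        @dataclass
        class WorksheetEntry:
            issue: str                       # quick title for the task
            area: str                        # role/area of life (Covey)
            quadrant: int                    # Covey Quadrant, 1-4
            status: Status = Status.PENDING
            notes: list[str] = field(default_factory=list)

        @dataclass
        class PhoneLogEntry:
            caller: str
            summary: str
            when: datetime = field(default_factory=datetime.now)

        # Example usage: one tracked task and one logged call.
        task = WorksheetEntry(issue="Draft blog post", area="Work", quadrant=2)
        task.notes.append("Outlined the SCRUM user stories")
        task.status = Status.FINISHED
        call = PhoneLogEntry(caller="Patrick", summary="Discussed time tracker scope")
        print(task, call, sep="\n")

    The point of the sketch is simply that each worksheet row and each phone call becomes one written-down record - the "write it down!" principle carried straight into the data model.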

    Read the article

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I’m not a big fan of the built-in Maintenance Plan functionality in SQL Server.  I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package) but it’s still a bit of a black box.  You don’t really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis.  I really prefer to know exactly what commands and with which options are being run on my servers. Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005, SP3.  I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts.  However, one of the maintenance plans which was just responsible for doing LOG backups was running more frequently than that system needed, and I thought I would just tweak the schedule a bit.  So I opened the Maintenance Plan and edited the properties of the Subplan, setting a new schedule, saved it and figured all was good to go.  But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the Owner of the job.  Specifically the error was, “Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID’..”  I was really confused because I had previously updated all of the jobs to have current accounts as the owners.  At first I thought it was just a fluke, but it happened on the next scheduled cycle so I investigated further and sure enough, that job had the old DBA’s account listed as the owner.  I fixed it and the job successfully ran to completion. Now, I don’t really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Job caused the Scheduled Job to be recreated with the old credentials.  I don’t know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner.  I looked through the options in SSMA and could not find anything would let me easily set the value that I wanted it to use.  I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan that it would use that new ID instead.  I’m sure that there is good reason that it works this way, but rather than mess around with it much more, I’m just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I’m not a fan of Maintenance Plans.

    Read the article

  • Ubuntu Server and setting up two NIC cards

    - by kmalik
    I have Ubuntu Server on a computer with a wireless and a wired NIC. The wireless card needs to get the internet and pass it to the Ubuntu server, as well as pass it along through the wired NIC to more computers. I am having issues getting the basic setup working, as I believe the route table is grabbing from the wrong NIC. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP. ETH0 (the wired NIC) is set up as the 10.10.10.0 network, with the server at 10.10.10.1. I am not a Linux or networking guru, but basically I am trying to have internet come from a guest network (192.168.1.0, I believe) to give internet to the Ubuntu server; the server should then also (a) have the wired NIC serve DHCP addresses in the 10.10.10.0 range to other computers via a switch or router (that acts as a switch), and (b) ideally pass along internet connectivity as well. But really, at this point, my hope is to at least get the internet working on the server and have DHCP hand out addresses correctly. At the moment the specific issue I am having is getting Ubuntu Server to connect to the internet with both NICs up and running correctly. Any help would be appreciated! The route table is as follows:

        Destination   Gateway      Genmask        Flags  Metric  Iface
        0.0.0.0       10.10.10.1   0.0.0.0        UG     100     eth0
        10.10.10.0    0.0.0.0      255.255.255.0  U      0       eth0
        169.254.0.0   0.0.0.0      255.255.0.0    U      0       eth0
        192.168.1.0   0.0.0.0      255.255.255.0  U      0       eth1

    My interfaces file is set up as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.1
            domain-name-servers 192.168.1.0

        auto eth1
        iface eth1 inet dhcp
            netmask 255.255.255.0
            gateway 192.168.1.0
            wpa-driver wext
            wpa-ssid "ssid_name"
            wpa-ap-scan 1
            wpa-proto wpa
            wpa-pairwise ccmp
            wpa-group ccmp
            wpa-key-mgmt wpa-psk
            wpa-psk "HASH"

    My dhcpd.conf (as there is a domain name server on here) is as follows:

        ddns-update-style none
        default-lease-time 600
        max-lease-time 7200
        authoritative
        option domain-name "Kamron's Network"
        option subnet-mask 255.255.255.0
        option broadcast-address 10.10.10.255
        option routers 192.168.1.0
        option domain-name-server 192.168.1.0 98.223.128.213
        subnet 10.10.10.0 netmask 255.255.255.0 {
            range 10.10.10.10 10.10.10.99
        }
        log-facility local7

    Read the article

  • Database Security: The First Step in Pre-Emptive Data Leak Prevention

    - by roxana.bradescu
    With WikiLeaks raising awareness around information leaks and the harm they can cause, many organizations are taking stock of their own information leak protection (ILP) strategies in 2011. A report by IDC on data leak prevention stated: "Increasing database security is one of the most efficient and cost-effective measures an organization can take to prevent data leaks. By utilizing the data protection, access control, account management, encryption, log management, and other security controls inherent in the database management system, entities can institute first-level control over the widest range of protected information." As a central repository for data that is growing at leaps and bounds, the database should be the first layer providing information leakage protection.

    Unfortunately, most organizations are not taking sufficient steps to protect their databases, according to a survey of the Independent Oracle User Group. For example, in most organizations any operating system administrator or database administrator can access all the data stored in the database - without any kind of auditing or monitoring. And it's not just administrators: database users can typically access the database with ad-hoc query tools from their desktops and bypass any application-level controls. Despite numerous regulations calling for controls to limit the powers of insiders, most organizations still put too many privileges in the hands of their employees. Time and time again these excess privileges have backfired. Internal agents were implicated in almost half of data breaches according to the Verizon Data Breach Investigations Report, and the rate is rising. Hackers also took advantage of these excess privileges very successfully, using stolen credentials and SQL injection attacks.

    But back to the insiders. Who are these insiders, and why do they do it? In 2002, U.S. Secret Service (USSS) behavioral psychologists and CERT information security experts formed the Insider Threat Study team to examine insider threat cases that occurred in US critical infrastructure sectors, from both a technical and a behavioral perspective. A series of fascinating reports has been published as a result of this work. You can learn more by watching the ISSA Insider Threat Web Conference.

    So as your organization starts to look at data leak prevention over the coming year, start by protecting your data at the source - your databases. IDC went on to say: "Any enterprise looking to improve its competitiveness, regulatory compliance, and overall data security should consider Oracle's offerings, not only because of their database management capabilities but also because they provide tools that are the first layer of information leak prevention." Learn more about Oracle Database Security solutions and get the whitepapers, demos, tutorials, and more that you need to protect data privacy from internal and external threats.

    Read the article

  • Should external USB hard drives auto-mount in Ubuntu 12.04.01 LTS

    - by Chris Good
    I want to have external USB hard drives automatically mounted when plugged in. I have two drives that are exactly the same except for the volume label; they both have the same UUID. I want to be able to swap them easily, as I'm using them for backups and want to keep one at home for off-site backup. I've set up /etc/fstab so they should mount at different places based on their volume labels. /etc/fstab:

        LABEL=Passport1 /media/Passport1 ntfs defaults,windows_names,locale=en_US.utf8 0 0
        LABEL=Passport2 /media/Passport2 ntfs defaults,windows_names,locale=en_US.utf8 0 0

    blkid shows:

        ...
        /dev/sdc1: LABEL="Passport2" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
        /dev/sdd1: LABEL="Passport1" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"

    They both mount automatically during reboot, but do not mount when just plugged in to a running system. I've read lots of stuff about this; much of it is old, so I'm not sure if it applies. Some of it says the mounts should happen automatically when plugged in, and lots of other stuff says you have to install other software to make this happen, although much of that just seems to set up the fstab. What's the real story? Here is /var/log/syslog when a drive is plugged in:

        Dec 14 11:22:58 ausyvutims1 kernel: [66221.300196] usb 1-1: new high-speed USB device number 6 using ehci_hcd
        Dec 14 11:22:58 ausyvutims1 mtp-probe: checking bus 1, device 6: "/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1"
        Dec 14 11:22:58 ausyvutims1 mtp-probe: bus: 1, device: 6 was not an MTP device
        Dec 14 11:22:58 ausyvutims1 kernel: [66221.656020] scsi7 : usb-storage 1-1:1.0
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.661534] scsi 7:0:0:0: Direct-Access     WD       My Passport 0748 1016 PQ: 0 ANSI: 6
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.666466] scsi 7:0:0:1: Enclosure         WD       SES Device       1016 PQ: 0 ANSI: 6
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.667739] sd 7:0:0:0: Attached scsi generic sg3 type 0
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.667913] ses 7:0:0:1: Attached Enclosure device
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.668047] ses 7:0:0:1: Attached scsi generic sg4 type 13
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.678473] sd 7:0:0:0: [sdc] 1953458176 512-byte logical blocks: (1.00 TB/931 GiB)
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.687700] sd 7:0:0:0: [sdc] Write Protect is off
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.687705] sd 7:0:0:0: [sdc] Mode Sense: 47 00 10 08
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.701076] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.701081] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.738062] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.738068] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.754558]  sdc: sdc1
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792006] sd 7:0:0:0: [sdc] No Caching mode page present
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792012] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        Dec 14 11:22:59 ausyvutims1 kernel: [66222.792016] sd 7:0:0:0: [sdc] Attached SCSI disk
        Dec 14 11:22:59 ausyvutims1 ata_id[16971]: HDIO_GET_IDENTITY failed for '/dev/sdc': Invalid argument

    Thanks for any help offered

    Read the article

  • Help with collision of a spawned object (position fixed) with objects translating on screen

    - by Amrutha
    Hey guys, I am creating a game using the Corona SDK, so I'm coding it in Lua. There are 2 separate functions:

    1. Translate the hit objects and change their color when they are tapped. The link below is the code I am using for the first function: http://developer.anscamobile.com/sample-code/fishies
    2. Spawn objects that will hit the translating objects on collision. Also, on collision the spawned object disappears and the translating object takes on a color (indicating the collision). In addition, the size of this spawned object depends on the input volume level.

    The function I have written is as follows:

        --VOICE INPUT CODE
        local r = media.newRecording()
        r:startRecording()
        r:startTuner()

        --local function newBar()
        --    local bar = display.newLine( 0, 0, 1, 0 )
        --    bar:setColor( 0, 55, 100, 20 )
        --    bar.width = 5
        --    bar.y = 400
        --    bar.x = 20
        --    return bar
        --end

        local c1 = display.newImage("str-minion-small.png")
        c1.isVisible = false
        local c2 = display.newImage("str-minion-mid.png")
        c2.isVisible = false
        local c3 = display.newImage("str-minion-big.png")
        c3.isVisible = false

        --SPAWNING
        local function spawnDisk( event )
            local phase = event.phase
            local volumeBar = display.newLine( 0, 0, 1, 0 )
            volumeBar.y = 400
            volumeBar.x = 20
            -- volumeBar.isVisible = false
            local v = 20 * math.log( r:getTunerVolume() )
            local MINTHRESH = 30
            local LEFTMARGIN = 20
            local v2 = MINTHRESH + math.max( v, -MINTHRESH )
            v2 = (display.contentWidth - 1 * LEFTMARGIN) * v2 / MINTHRESH
            volumeBar.xScale = math.max( 20, v2 )
            local l = volumeBar.xScale
            local cnt1 = 0
            local cnt2 = 0
            local cnt3 = 0
            local ONE = 1
            local val = event.numTaps
            --local px = event.x
            --local py = event.y
            if "ended" == phase then
                --audio.play( popSound )
                --myLabel.isVisible = false
                if l > 50 and l <= 150 then
                    -- c1:setFillColor(10,105,0)
                    -- c1.isVisible = false
                    c1.x = math.random( 10, 450 )
                    c1.y = math.random( 10, 300 )
                    physics.addBody( c1, { density=1, radius=10.0 } )
                    c1.isVisible = true
                    cnt1 = cnt1 + ONE
                    return c1
                elseif l > 100 and l <= 250 then
                    --c2:setFillColor(200,10,0)
                    c2.x = math.random( 10, 450 )
                    c2.y = math.random( 10, 300 )
                    physics.addBody( c2, { density=2, radius=9000.0 } )
                    c2.isVisible = true
                    cnt2 = cnt2 + ONE
                    return c2
                elseif l >= 250 then
                    c3.x = math.random( 40, 450 )
                    c3.y = math.random( 40, 300 )
                    physics.addBody( c3, { density=2, radius=7000.0, bounce=0.0 } )
                    c3.isVisible = true
                    cnt3 = cnt3 + ONE
                    return c3
                end
            end
        end

        buzzR:addEventListener( "touch", spawnDisk ) -- touch the screen to create disks

    Now both functions work fine independently, but there is no collision happening. It's almost as if the translating object and the spawned object are on different layers: the translating object passes through the spawned object freely. Can anyone please tell me how to resolve this problem and get them to collide? It's my first attempt at game development, and for a mobile platform at that, so I would appreciate all help. Also, if I have not been specific enough, do let me know and I'll try to frame the query better :). Thanks in advance.
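    In case it helps others reading this: in the fishies sample the fish are translated by an enterFrame listener, not by the physics engine, so unless they are also given physics bodies the engine never sees them and no collision can fire. A sketch of the usual fix, assuming the translating objects are plain display objects (the name fish is illustrative, not from the question):

        -- a sketch, not a drop-in fix: give each translating object a body;
        -- a "kinematic" body ignores gravity but still generates collision events
        local physics = require("physics")
        physics.start()
        physics.addBody( fish, "kinematic", { radius = 20 } )

        -- a runtime listener sees every collision in the world
        local function onCollision( event )
            if event.phase == "began" then
                -- event.object1 / event.object2 are the two colliding objects:
                -- color the translating object and hide the spawned disk here
            end
        end
        Runtime:addEventListener( "collision", onCollision )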

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing a client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language- and framework-agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what each component means. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such.) I have a server.js that ties the whole application together. It initializes all the other components (including the database connection, and what I think are the "models" and the "views"), receives HTTP requests and decides which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file:

        var connection = mysql.createConnection({
            host     : 'localhost',
            user     : 'root',
            password : 'sqlrevenge',
            database : 'blog'
        });
        //...
        app.get("/login", function (req, res) {
            // Function handles a GET request for login forms
            if (process.env.NODE_ENV == 'DEVELOPMENT') {
                mu.clearCache();
            }
            session.session_from_request(connection, req, function (err, session) {
                if (err) {
                    console.log('index.js session error', err);
                    session = null;
                }
                login_view.html(res, user_model, post_model, session, mu);
                // I named my view functions "html" in case I might want to add other
                // output types (such as a JSON API), or should I opt for completely
                // separate views then?
            });
        });

    I have another file named session.js. It receives a cookies object and reads the stored data to decide whether it's a valid user session or not. It also includes a function named login that changes the value of cookies. First, I thought it would be part of the controller, since it kind of deals with user input and supplies data to the models. Then, I thought that maybe it was a model, since it deals with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a view, since it outputs data (cookies are part of HTTP headers, which are output).
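    For comparison, one way to carve that route out of server.js is to treat the route handler as the controller and pass the models and view in from outside. A minimal sketch, assuming the Express 3-era API used in the question; the file layout and names are illustrative, not prescriptive:

        // controllers/login.js -- a sketch; file name and layout are illustrative
        // The controller stays thin: it interprets the request, asks the model
        // layer for data, and picks a view to render the response.
        module.exports = function (connection, session, models, loginView, mu) {
            return function (req, res) {
                session.session_from_request(connection, req, function (err, s) {
                    if (err) {
                        console.log('login controller session error', err);
                        s = null;
                    }
                    loginView.html(res, models.user, models.post, s, mu);
                });
            };
        };

        // server.js then only wires things together:
        //   app.get('/login', require('./controllers/login')(connection, session, models, login_view, mu));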

    Read the article

  • Ubuntu 12.04.3 Graphics Issues: Broken Pipes, Reinstalled Xorg and Bumblebee

    - by user190488
    It seems I have a problem, and am only making it worse by following what I find online. I have a new Asus N550JV-D71 (not sure about the part after the dash, though I definitely know it includes 71). I decided to downgrade Windows 8 to 7, then dual boot Ubuntu 12.04 with it (there were issues with Windows 8, and I had a Windows 7 disk handy). It did work and, after installing Bumblebee in a tty (because it wouldn't boot when Bumblebee was first installed), it worked marvelously for a little less than a week. However, I restarted it last night and got the "Could not write bytes: Broken pipe" error. (I see it's a very common error, but I've looked at the majority of the suggested similar questions already.)

    I followed what I could find online, followed those instructions (making sure not to install any graphics drivers other than what Bumblebee provides), and it just seems to go further and further downhill. I'm afraid I didn't write down the exact steps that got me to this point (it was late by the time I gave up the night before), but they involved reinstalling lightdm, xorg (and xserver?), and Bumblebee. I then changed the bumblebee.conf file so that Device=nvidia. I'm pretty new to Linux in general (I've used it since 10.04, but I hadn't had issues up until this computer, so it let me stay a newbie), so I'm not exactly sure which log files to look at to find the errors to look up. However, I did look at lshw and noticed that the display was marked as unassigned. Also, if I try to start lightdm from the command line, it always stops at "Stopping Mount network filesystems." I should note that there isn't an xorg.conf file, and no .Xauthority.

    I would really, really prefer not to reinstall 12.04 if possible. I managed to get grub to display only a short time ago, and I can't boot from the DVD drive unless I go into the BIOS settings and manually change the boot order (that was an issue from the beginning, before the Ubuntu install), and getting into those settings often means rebooting several times because the window to get into them is extremely small. I have most of what I need backed up, however, in case it does come to that. If I really have to, I can just use the latest Ubuntu version instead of the LTS, but the reason I chose 12.04 in the first place is that I need something stable-ish, and Windows isn't suitable for what I need to do. I should note that the reason I restarted last night in the first place was that it wasn't charging the battery, and the wifi kept going out.

    Hardware: Nvidia GeForce GT 750M, Intel HD Graphics 4600
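    For anyone in a similar hole: the commonly recommended recovery on 12.04 with Optimus hardware like this (GT 750M plus Intel HD 4600) is a clean purge and reinstall of Bumblebee from its project PPA, run from a tty. A sketch only; the PPA and package names were current for 12.04, so verify them before running:

        # from a tty (Ctrl+Alt+F1) -- a sketch, not a guaranteed fix
        sudo apt-get purge bumblebee bumblebee-nvidia nvidia-*   # clear the broken driver mix
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia primus
        sudo usermod -a -G bumblebee $USER                       # group membership needed for optirun
        sudo reboot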

    Read the article

  • How do I stop icons appearing on the desktop under conky?

    - by Seamus
    When I download something to my desktop, or insert a CD or flash drive, the icon appears on my desktop. When conky is running, the icon sometimes appears in the top right corner, underneath conky, where I can't see it. How do I stop this happening? My .conkyrc is pasted below. I didn't write it all myself, so I'm not entirely sure what I need to change, or what parts are relevant for this particular question...

        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo

        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type override
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes

        # fiddle with window
        use_spacer right

        # Use Xft?
        use_xft yes
        xftfont DejaVu Sans:size=8
        xftalpha 0.8
        text_buffer_size 2048

        # Update interval in seconds
        update_interval 3.0

        # Minimum size of text area
        # minimum_size 250 5

        # Draw shades?
        draw_shades no

        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase

        # Stippled borders?
        stippled_borders 3

        # border margins
        border_margin 9

        # border width
        border_width 10

        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes

        # Text alignment, other possible values are commented
        #alignment top_left
        alignment top_right
        #alignment bottom_left
        #alignment bottom_right

        # Gap between borders of screen and text
        gap_x 10
        gap_y 20

        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color orange}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine

        ${color orange}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${acpitemp}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME ${goto 150}PID ${goto 200}CPU% ${goto 250}MEM%
        ${top name 1} ${goto 150}${top pid 1} ${goto 200}${top cpu 1} ${goto 250}${top mem 1}
        ${top name 2} ${goto 150}${top pid 2} ${goto 200}${top cpu 2} ${goto 250}${top mem 2}
        ${top name 3} ${goto 150}${top pid 3} ${goto 200}${top cpu 3} ${goto 250}${top mem 3}
        ${top name 4} ${goto 150}${top pid 4} ${goto 200}${top cpu 4} ${goto 250}${top mem 4}

        ${color orange}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Home: ${fs_free_perc /home}% ${fs_bar 6 /}$color
        Free Space: ${fs_free /home}

        ${color orange}NETWORK (${addr eth0}) ${hr 2}$color
        Down: $color${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s
        ${downspeedgraph eth0 25,140 000000 ff0000} ${alignr}${upspeedgraph eth0 25,140 000000 00ff00}$color
        Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

        ${color orange}WIRELESS (${addr wlan0}) ${hr 2}$color
        Down: $color${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s
        ${downspeedgraph wlan0 25,140 000000 ff0000} ${alignr}${upspeedgraph wlan0 25,140 000000 00ff00}$color
        Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}
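    For what it's worth, the settings usually pointed at for this are own_window_type and the gap values; whether they help depends on the window manager and on nautilus drawing the desktop, so treat the following as a commonly suggested tweak rather than a guaranteed fix:

        # a sketch: let the WM manage conky as a normal kept-below window
        # instead of overriding the desktop window (icons may then avoid it)
        own_window_type normal
        # or: pull conky in from the screen edge so the icon grid's
        # top-right cell stays visible (value is illustrative)
        gap_x 230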

    Read the article

< Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >