Search Results

Search found 65028 results on 2602 pages for 'two way object databindin'.


  • Two different subwoofers aren't working on my machine or my phone

    - by Philluminati
    I have speakers that came with my computer: two small desktop speakers and a subwoofer with a bass volume control on the back. It's worked for years. I was listening to Spotify on my speakers as loud as they would possibly go, with the bass turned up to max, and suddenly the subwoofer stopped working. I've plugged the speakers into my Android HTC Desire Z handset and again the desktop speakers play music but the subwoofer doesn't (even after fiddling with the volume control). So I figured I'd broken it. I went to Amazon and bought a replacement one. I bought this one: http://www.amazon.co.uk/dp/B002N46YD8/ref=pe_217191_31005151_dp_1 but it doesn't work either, on my desktop or my Android phone. I had a play with alsamixer and the LFE and center controls are switched on and the speakers are okay... but still no bass. Am I unlucky enough to have bought a new subwoofer that is already broken out of the box, or is there something else wrong that I could look into, please? Are there any other tests I could perform to see if the problem is me or not?
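
    One quick check, not from the original post: play a pure low-frequency tone so the subwoofer is tested in isolation from any music or application. A minimal sketch, assuming the numpy and sounddevice Python packages are available (frequency, amplitude and duration are arbitrary placeholders):

        # sub_test.py -- play a 50 Hz sine for 5 seconds; only the subwoofer should
        # reproduce it audibly, so silence here points at the sub or its feed.
        # Assumes numpy and sounddevice are installed; all values are placeholders.
        import numpy as np
        import sounddevice as sd

        fs = 44100                                  # sample rate in Hz
        t = np.linspace(0, 5, 5 * fs, endpoint=False)
        tone = 0.5 * np.sin(2 * np.pi * 50 * t)     # 50 Hz test tone at half amplitude
        sd.play(tone, fs)
        sd.wait()

    If a 50 Hz tone is silent on both the old and the new subwoofer while a higher tone (say 1 kHz) plays fine on the satellites, the fault is more likely in the source or cabling than in two broken subwoofers.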

    Read the article

  • Inkscape: Copying an object, retaining transparency

    - by dpk
    I'm looking for a way to copy objects from one window to another without losing the surrounding transparency. I have two Inkscape windows, and the setup is pretty simple. In the first window I drew a filled circle and a filled rectangle, with the circle set on top of the rectangle to show that the area around the circle is transparent (that is, you can see the rectangle "under" the circle; see screenshot 1, left). In the second window I just drew a filled rectangle (screenshot 1, right). When I copy the circle from window 1 to window 2, the transparency around the circle is lost (screenshot 2). I've verified that the backgrounds of the documents are 0% alpha/white. This is a rather contrived example but it is readily reproducible. The real graphics I am working with have a bunch of objects all in a single group, but I get the same result. I feel like I'm missing something. The circle no longer behaves like a circle at its destination; instead, it acts kind of like a bitmap. I'm definitely not using the bitmap copy feature.

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit/s, which is not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually office-to-server is faster than server-to-server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say it is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm at an intermediate level with networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or whether they're just trying to shirk their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
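
    One way to narrow this down, not from the original post: measure raw TCP throughput between the two servers with a plain socket, so rsync, ssh and HTTP are all out of the picture (a tool like iperf does the same job more thoroughly). A minimal sketch; the port and transfer size are placeholders:

        # throughput_probe.py -- raw TCP throughput between the two data centers.
        # Run "python throughput_probe.py server" on one box and
        # "python throughput_probe.py client <server-ip>" on the other.
        # Port and payload size are placeholders.
        import socket, sys, time

        PORT, CHUNK, TOTAL = 5001, 64 * 1024, 512 * 1024 * 1024   # 512 MB test payload

        def server():
            srv = socket.socket()
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            sent, buf = 0, b"\x00" * CHUNK
            while sent < TOTAL:
                conn.sendall(buf)              # push zeros as fast as TCP allows
                sent += CHUNK
            conn.close()

        def client(host):
            sock = socket.create_connection((host, PORT))
            received, start = 0, time.time()
            while True:
                data = sock.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print("%.0f MB in %.1f s = %.1f Mbit/s" % (received / 1e6, secs, received * 8 / secs / 1e6))

        if __name__ == "__main__":
            server() if sys.argv[1] == "server" else client(sys.argv[2])

    If this raw transfer also tops out around 6-10 Mbit/s, the bottleneck is on the path itself rather than in rsync or the encryption.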

    Read the article

  • Two monitors with Mac Mini - one displays black despite receiving a signal

    - by alex
    My Mac Mini outputs to my two new monitors, Dell U2311Hs. The LED on the bezel displays blue when receiving a signal, or yellow otherwise. Both screens are displaying blue, and it also seems my Mini can see both of them... However, one of them is black. It just displays black, but appears to be receiving a signal (when I turn the Mac off, it then displays No Signal). To make things weirder, on startup the boot screen (white with the Apple logo) appears on the right monitor (the one that now displays black). Occasionally, it flickers up on the black screen for 1 second. I have tried Detect Displays; it appears to do nothing. I'm also running a dual-monitor KVM. Video connections are DVI-D. How can I fix this situation? Thanks. Update: This is the weirdest thing. I used the DVI-D cable that came with the KVM and it seems to have fixed it. I hadn't bothered before because it looks identical to any other DVI cable (in form and pin-out). So, I will accept an answer if someone can tell me what the difference in these cables may be.

    Read the article

  • Two VGA monitors on Lenovo IdeaCentre H520 Desktop

    - by Sebastian-Laurentiu Plesciuc
    I recently bought a Lenovo IdeaCentre H520 computer and two VGA LED monitors. This particular PC has a dedicated NVIDIA GeForce GT630 video card and an integrated Intel HD Graphics 2500 video card. Both cards have VGA out, and the GeForce card also has an HDMI out. I have installed Windows 8 and I can't seem to use both cards. I have connected both monitors, one to the VGA out of the GeForce card and one to the VGA out of the integrated card. I looked through the BIOS options for video and I can only select the dedicated card, the integrated one, or the Auto option. This kinda sucks. I was wondering what kind of options I have available. I have a VGA female to DVI-A male adaptor, and I was wondering if it could work if I hook it to a DVI-A female to HDMI male adaptor, plug one monitor into the VGA out of the GeForce video card, and connect the other through both adapters to the HDMI out. Any chance this could work? I was looking online for a VGA to HDMI live cable but it's kind of expensive.

    Read the article

  • Connect two networks

    - by Meek Barrios
    Connecting two different offices with a wireless link and Linux boxes.

    Hardware: 2 Cisco RV42, 2 dual-homed Linux boxes running Debian, 2 2Wire and 2 AirMax 5.

    Configuration is:

        Office A
            LAN A (10.1.1.0/24) -> RV42 A (WAN1 - 10.1.1.254) -> 2Wire A (Internet)
            LINUX A ( ETH0 (LAN) 10.1.1.253, ETH1 (LINK) 10.1.3.3 )

        Wireless Link --- AirMax A <-> AirMax B connected as Wireless Bridge

        Office B
            LAN B (10.1.2.0/24) -> RV42 B (WAN1 - 10.1.2.254) -> 2Wire B (Internet)
            LINUX B ( ETH0 (LAN) 10.1.2.253, ETH1 (LINK) 10.1.3.4 )

    Network configuration is:

        LAN A    - Default Gateway 10.1.1.254
        RV42 A   - Static Route 10.1.3.0/24 on 10.1.1.253
                   Static Route 10.1.2.0/24 on 10.1.1.253
                   Default on 192.168.1.1 (WAN1 Internet Access)
        Linux A  - ETH0 10.1.1.253 netmask 255.255.255.0 gw 10.1.1.254
                   ETH1 10.1.3.3 netmask 255.255.255.0 gw 10.1.3.1
        AIRMAX A - 10.1.3.1 netmask 255.255.255.0 gw 10.1.3.1

        LAN B    - Default Gateway 10.1.2.254
        RV42 B   - Static Route 10.1.3.0/24 on 10.1.2.253
                   Static Route 10.1.1.0/24 on 10.1.2.253
                   Default on 192.168.1.1 (WAN1 Internet Access)
        Linux B  - ETH0 10.1.2.253 netmask 255.255.255.0 gw 10.1.2.254
                   ETH1 10.1.3.4 netmask 255.255.255.0 gw 10.1.3.2
        AIRMAX B - 10.1.3.2 netmask 255.255.255.0 gw 10.1.3.2

    Both Linux boxes have ip_forward set to 1 and the following in iptables:

        iptables -F
        iptables -X
        iptables -P FORWARD ACCEPT
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT

    I can ping any IP on the 10.1.1.0/24 segment from Linux B, and any IP on the 10.1.2.0/24 segment from Linux A; however, I cannot connect to HTTP or FTP on those machines. From LAN A I cannot see any other network. I'm looking for some advice on this configuration, or a better solution. Regards
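
    Since ICMP gets through but HTTP and FTP do not, a useful check (not from the original post) is whether the TCP handshake completes across the bridge and whether a request padded to more than one MTU gets a response back; a stall there would point at an MTU/MSS problem on the wireless link. A rough probe sketch, with the target IP as a placeholder:

        # tcp_probe.py -- check whether the TCP handshake completes across the wireless
        # bridge and whether a response comes back after sending a request padded beyond
        # one 1500-byte MTU. Target address is a placeholder.
        import socket

        TARGET = "10.1.1.253"   # a host on the far side of the link running a web server

        try:
            s = socket.create_connection((TARGET, 80), timeout=5)
            print("handshake OK")
            # Pad the request so at least one full-size frame must cross the link.
            request = b"GET / HTTP/1.0\r\nHost: probe\r\nX-Pad: " + b"x" * 1600 + b"\r\n\r\n"
            s.sendall(request)
            reply = s.recv(1024)
            print("got %d bytes back: %r" % (len(reply), reply[:60]))
            s.close()
        except socket.error as exc:
            print("probe failed:", exc)

    If the handshake succeeds but the padded request never gets an answer while pings pass, lowering the MTU (or clamping MSS) on the link interfaces is the usual suspect.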

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

        eth0 192.168.100.10
        eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast and broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

        ip rule add from 192.168.100.10 lookup 10
        ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 and 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests being sent out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
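
    For reference, this is roughly what the SO_BINDTODEVICE approach mentioned above looks like for a UDP broadcast socket on Linux; as noted, it traditionally requires elevated privileges, which is exactly the drawback in question. A minimal sketch, with the interface name, addresses, port and payload as placeholders:

        # broadcast_bind.py -- pin a UDP broadcast to one NIC via SO_BINDTODEVICE (Linux).
        # This mirrors the workaround described above; it normally needs root/CAP_NET_RAW.
        # Interface, addresses, port and payload are placeholders.
        import socket

        SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)   # 25 on Linux
        IFACE, PORT = b"eth1", 9000

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, IFACE)
        sock.bind(("192.168.100.11", PORT))                 # eth1's own address
        sock.sendto(b"hello", ("192.168.100.255", PORT))    # broadcast leaves via eth1
        sock.close()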

    Read the article

  • trying to route between two openvpn clients

    - by user42055
    I have two OpenVPN clients on the 10.0.1.0 (client1) and 192.168.0.0 (client2) subnets, with the server's OpenVPN connection having the IP 192.168.150.1. The server has IP forwarding enabled. Currently, client1's VPN IP is 192.168.150.10 and the P-t-P IP is 192.168.150.9. I have created the following static route on client1:

        route add -net 10.0.1.0 netmask 255.255.255.0 gw 192.168.150.9

    The routing table on client1 looks like this:

        Destination    Gateway        Genmask          Flags  MSS  Window  irtt  Iface
        192.168.150.9  0.0.0.0        255.255.255.255  UH     0    0       0     tun0
        192.168.150.1  192.168.150.9  255.255.255.255  UGH    0    0       0     tun0
        10.0.1.0       192.168.150.9  255.255.255.0    UG     0    0       0     tun0
        192.168.0.0    0.0.0.0        255.255.255.0    U      0    0       0     eth0
        127.0.0.0      0.0.0.0        255.0.0.0        U      0    0       0     lo
        0.0.0.0        192.168.0.1    0.0.0.0          UG     0    0       0     eth0

    I thought this would be correct to allow traffic from client1 to reach computers on client2's network, but it does not work. Is 192.168.150.9 (the P-t-P address) the correct one to be routing through? I tried using 192.168.150.1 but I couldn't create the route. I hope this is clear.

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with 1 [CLC+CC] and 2 [NC] computers. I have another Cloud-B with the same configuration [using the Ubuntu Enterprise Cloud]. Both of them are working fine individually, on the same LAN. Now, if I want to add the NC of Cloud-A to the CC of Cloud-B [in case the resources of Cloud-B are exhausted], how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what exactly happens when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is, say there are multiple cloud providers. A user is subscribed to any one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply. I am sorry if I'm making a mistake anywhere... Thanks in advance :) Regards, www.TechProceed.com

    Read the article

  • Preventing endless forwarding with two routers

    - by jarmund
    The network in question looks basically like this:

                               /----Inet1
                              /
        H1---[111.0/24]---GW1---[99.0/24]
                                    \----GW2-----Inet2

    Device explanation:

        H1: Host with IP 192.168.111.47
        GW1: Linux box with IPs 192.168.111.1 and 192.168.99.2, as well as its own route to the internet.
        GW2: Generic wireless router with IP 192.168.99.1 and its own route to the internet.
        Inet1 & Inet2: Two possible routes to the internet

    In short: H1 has more than one possible route to the internet. H1 is supposed to only access the internet via GW2 when that link is up, so GW1 has some policy-based routing set up just for H1:

        ip rule add from 192.168.111.47 table 991
        ip route add default via 192.168.99.1 table 991

    While this works as long as GW2 has a direct link to the internet, the problem occurs when that link is down. What then happens is that GW2 forwards the packet back to GW1, which again forwards it back to GW2, creating an endless loop of TCP ping-pong. The preferred result would be that the packet was just dropped. Is there something that can be done with iptables on GW1 to prevent this? Basically, an iptables-friendly version of "if the packet comes from GW2 but originated from H1, drop it". Note 1: It is preferable not to change anything on GW2. Note 2: H1 needs to be able to talk to both GW1 and GW2, and vice versa, but only GW2 should lead to the internet. TLDR: H1 should only be allowed internet access via GW2, but still needs to be able to talk to both GW1 and GW2. EDIT: The interfaces on GW1 are br0.105 for the '99' network and br0.111 for the '111' network. The solution may or may not be obnoxiously simple, but I have not been able to produce the proper iptables syntax myself, so help would be most appreciated. PS: This is a follow-up question from this question

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

    - A PC running Windows XP with an internal WiFi adapter.
    - A base station with both WiFi and a wireless modem (WiModem).
    - A mobile device with both WiFi and WiModem.

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
          WiFi 1: 192.168.2.10/24
          WiFi 2: 192.168.3.10/24
          Default route: 192.168.2.1

        Base Station
          WiFi 1: 192.168.2.1/24
          WiFi 2: 192.168.3.1/24
          WiModem: 192.168.4.1/24

        Mobile
          WiFi: 192.168.3.20/24
          WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and the PC, as the manual setup is problematic when you start having multiple mobiles and PCs. This means that the PC can only have 1 IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set up using two pfSense routers, arranged like this:

        DMZ1   WAN1        WAN2   DMZ2
         |      |            |      |
         |      |            |      |
         \___ PF1            PF2___/
               |              |
               \___TRUSTED____/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets. I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets; I've verified this using tcpdump. I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • Two folders with identical names causing many problems.

    - by R. A. Chaucer
    In my 'Documents' folder (Win7) I have two folders with names that appear in Explorer to be identical, though their contents are different. I can rename them both to something else (e.g. 'Test') and Explorer doesn't complain. The dir listing that cmd.exe and PowerShell give me only lists one of them, but also lists this suspicious entry:

        20/04/2010  12:16 PM    <DIR>          ????

    Even if I rename the folders to have unique names, one of them still shows up as ???? in cmd.exe. Desktop.ini in my Documents folder doesn't contain anything out of the ordinary. Both folders appear to be read-only in their properties panel, and if I untick the read-only box it will ask me if I want to apply the action recursively, but either way when I close the panel and open it again the folder is once again read-only. They are both set to not inherit permissions. The folder that shows up correctly in the cmd.exe dir listing is the "real" one; the other seems to be automatically created when a program tries to access it. How is this possible? This is driving me nuts!
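
    Names that look identical in Explorer but render as ???? in cmd.exe often differ only by non-ASCII or invisible Unicode characters. One quick way (not from the original post) to see what the names really contain, with the path as a placeholder:

        # list_names.py -- print the raw code points of every entry in a folder so that
        # visually identical names (e.g. containing invisible Unicode characters) can be
        # told apart. The path below is a placeholder.
        import os

        path = r"C:\Users\YourName\Documents"
        for name in os.listdir(path):
            # repr() and ord() expose characters Explorer renders invisibly or identically
            print(repr(name), [hex(ord(ch)) for ch in name])

    If the two entries show different code points, the odd one can then be renamed or removed by its exact name from a script rather than from Explorer.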

    Read the article

  • Logging communication between two VMs

    - by sYnfo
    Hi, I'm trying to set up the "malware lab" described in this paper. So far, I've set up a Windows guest system, adding one host-only network adapter and setting this (sorry if the names aren't exactly correct, I don't have an English language version):

        IP address      - 10.0.0.3
        Subnet mask     - 255.255.255.0
        Default gateway - not set
        Preferred DNS   - 10.0.0.4
        Alternate DNS   - not set

    And a Linux guest system, Ubuntu 9.04, with two network adapters, bridged (eth0) and host-only (eth1), setting eth1's IP address to 10.0.0.4 and leaving eth0 to be set by DHCP. Then I configured iptables as described in the paper, i.e.:

        iptables -F -t nat
        iptables -F -t mangle
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -A PREROUTING -i eth0 -j ACCEPT
        iptables -t mangle -A PREROUTING -p udp -i eth1 -d 10.0.0.3 --dport 53 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 --dport 80 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 -d 10.0.0.3 --dport 6000:7000 -j ACCEPT
        iptables -t mangle -A PREROUTING -i eth1 -j ULOG
        iptables -t mangle -A PREROUTING -i eth1 -j DROP

    Now, when I try to ping the Windows system from within the Linux system, it does not reply; I guess that's perfectly normal, because iptables is blocking the ping response. Same when I try to ping the Linux system from within Windows. But when I try to access any web page from within the Windows system, I would expect that action to get logged by iptables. The thing is, I don't see any lines of that kind in the log file (if I am looking in the right place, that is. :) It is at /var/log/messages, isn't it?). So, what do you think might be the problem here? I should note that this is the first time I'm using Linux, so don't expect ANY working knowledge of Linux at all... :) Also, since English is not my mother tongue, feel free to point out any grammatical mistakes... :) Thanks for any advice.

    Read the article

  • Run 3 monitors on two different video cards?

    - by hullot
    Can I run 3 monitors on two different video cards? I have one ATI and one Nvidia card. The ATI has 2 HDMI connections, and they both work. Both cards are also picked up in Windows, one being the ATI and the other the Nvidia, though the latter shows up as a VGA Controller even though the card only has 2 DVI ports. So, one DVI cable goes into that Nvidia card. Of the 3 monitors, only the 2 HDMI ones on the ATI are picked up, not the third one, which is connected to the Nvidia via DVI. How can I run three monitors then? I suppose I can't install both drivers, so I'm unsure what to do. Is this possible? I just want the Nvidia card to power the third screen, no gaming on it, nothing. Also, the ATI is picked up as the primary card as well, so no hurdle there. EDIT: Hm, I just installed the Nvidia drivers and it picked up the third screen no problem. Hope there aren't any major conflicts. Will post this as an answer and mark it correct when I'm able. Can't as a new user.

    Read the article

  • Issues with "There is already an object named 'xxx' in the database"

    - by Hoser
    I'm fairly new to SQL so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. The problem is that whenever I try to use my temp table, it tells me it cannot be used because there is already an object with that name. I frequently try switching up the names, and sometimes it'll let me work with the table for a little while, but it never lasts for long. Am I dropping the table incorrectly? Also, I've had people suggest just using a permanent table, but this database does not allow me to do that.

        create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal)

        select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue
        into #RandomTableName
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw
        where (ObjectName like 'Processor' AND CounterName like '% Processor Time')
           OR (ObjectName like 'System' AND CounterName like 'Processor Queue Length')
           OR (ObjectName like 'Memory' AND CounterName like 'Pages/Sec')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read')
           OR (ObjectName like 'Physical Disk' and CounterName like '% Disk Time')
           OR (ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100)
        order by ObjectName, SampleValue

        drop table #RandomTableName

    Read the article

  • Two video cards on one Mobo

    - by InfinityKing
    I recently purchased a new HP Pavilion p6-2265eo. It was only then that I realised it has just one DVI and one HDMI output. I will connect my TV via HDMI, so I am left with only one DVI output, and I need to have two monitors. Please help. Should I purchase a new video card and install it? My knowledge is limited. The specifications of my computer say that there is 1 x PCI-E x16 and 3 x PCI-E x1. 1) I suppose the video card already present in my purchase is connected to the PCI-E x16. Am I right? I don't want to open my desktop right now and check it for myself as it can void the warranty, so I need an experienced person to tell me that. 2) I have an old Nvidia GeForce 7200 GT. Is it possible for me to connect it to my leftover PCI-E x1? I searched for PCI-E x1 on the net, and as far as I can understand the slot is too small for my old GeForce 7200 GT graphics card. 3) What are the options? Please help this dummo :) Thanking you in advance,

    Read the article

  • Configuring two subnets with two NICs. Access from a NAS to the internet

    - by archipestre
    I am having trouble configuring my NAS. I have a DSL router with WiFi (192.168.1.1) in my flatmate's room. In my room I have a server with two NICs:

    1) wlan0 (192.168.1.2), which connects to the DSL router via wireless
    2) em1 (192.168.0.1), which connects to the NAS (192.168.0.20) with a crossover cable

    I have Fedora 17 and I have enabled packet forwarding. My IP configuration is as follows:

        WLAN0  inet 192.168.0.1  netmask 255.255.255.0  broadcast 192.168.0.255
        EM1    inet 192.168.1.2  netmask 255.255.255.0  broadcast 192.168.1.255

    My routing table looks like:

        Destination   Gateway      Genmask        Flags  Metric  Ref  Use  Iface
        0.0.0.0       192.168.1.1  0.0.0.0        UG     0       0    0    wlan0
        192.168.0.0   0.0.0.0      255.255.255.0  U      0       0    0    em1
        192.168.1.0   0.0.0.0      255.255.255.0  U      0       0    0    wlan0

    I have enabled a static route on the DSL router:

        Status  Network Destination  Subnet Mask    Interface  Gateway      Remove  Edit
        Active  192.168.0.0          255.255.255.0  LAN        192.168.1.2

    From my server I can ping the DSL router and the NAS. From the NAS I can ping both NICs of the server. However, the NAS is unable to ping the DSL router or any address on the Internet. Any idea what is wrong? Thank you in advance

    Read the article

  • Merging two separate DNS zones

    - by cube
    This is a hypothetical question. Let's suppose I have two networks, each with its own DNS server. Network A has names a1.local, a2.local, ... and network B has b1.local, b2.local, .... The zone file for each of the networks looks something like this:

        $ORIGIN local
        @    IN SOA   .... blah blah blah
        a1   A        1.2.3.4
        a2   A        2.3.4.5
        ...

    for A, and

        $ORIGIN local
        @    IN SOA   .... blah blah blah
        b1   A        3.4.5.6
        b2   A        4.5.6.7
        ...

    for B. Now I also have a regular internet domain example.com and I want to access the machines as a1.A.example.com, b1.B.example.com, ... How will I have to change the configuration of name servers in networks A and B? (in fact I am writing a super-magic DNS server, currently serving A and B separately, but there is a chance that I will have to add the ability to merge the networks; so I'm interested in knowing the problems which lie ahead of me and how to prepare for the possibility)

    Read the article

  • Exposing model object using bindings in custom NSCell of NSTableView

    - by Hooligancat
    I am struggling with what I would think would be a relatively common task. I have an NSTableView that is bound to its array via an NSArrayController. The array controller has its content set to an NSMutableArray that contains one or more NSObject instances of a model class. What I don't know how to do is expose the model inside the NSCell subclass in a way that is bindings-friendly. For the purpose of illustration, we'll say that the object model is a person consisting of a first name, last name, age and gender. Thus the model would look something like this:

        @interface PersonModel : NSObject {
            NSString * firstName;
            NSString * lastName;
            NSString * gender;
            int * age;
        }

    Obviously with the appropriate setters, getters, init, etc. for the class. In my controller class I define an NSTableView, NSMutableArray and an NSArrayController:

        @interface ControllerClass : NSObject {
            IBOutlet NSTableView * myTableView;
            NSMutableArray * myPersonArray;
            IBOutlet NSArrayController * myPersonArrayController;
        }

    Using Interface Builder I can easily bind the model to the appropriate columns:

        myPersonArray --> myPersonArrayController --> table column binding

    This works fine. So I remove the extra columns, leaving one column hidden that is bound to the NSArrayController (this creates and keeps the association between each row and the NSArrayController), so that I am down to one visible column in my NSTableView and one hidden column. I create an NSCell subclass and put in the appropriate drawing method to create the cell. In my awakeFromNib I establish the custom NSCell subclass:

        PersonModel * aCustomCell = [[[PersonModel alloc] init] autorelease];
        [[myTableView tableColumnWithIdentifier:@"customCellColumn"] setDataCell:aCustomCell];

    This, too, works fine from a drawing perspective. I get my custom cell showing up in the column and it repeats for every managed object in my array controller. If I add an object or remove an object from the array controller, the table updates accordingly. However... I was under the impression that my PersonModel object would be available from within my NSCell subclass. But I don't know how to get to it. I don't want to set each NSCell using setters and getters, because then I'm breaking the whole model concept by storing data in the NSCell instead of referencing it from the array controller. And yes, I do need to have a custom NSCell, so having multiple columns is not an option. Where to from here? In addition to the Google and StackOverflow search, I've done the obligatory walk-through of Apple's docs and don't seem to have found the answer. I have found a lot of references that beat around the bush but nothing involving an NSArrayController. The controller makes life very easy when binding to other elements of the model entity (such as a master/detail scenario). I have also found a lot of references (although no answers) for when using Core Data, but I'm not using Core Data. As per the norm, I'm very grateful for any assistance that can be offered!

    Read the article

  • Model Object called in NSOutlineView DataSource Method Crashing App

    - by Alec Sloman
    I have a bewildering problem and am hoping someone can assist. I have a model object, called Road. Here's the interface:

        @interface RoadModel : NSObject {
            NSString *_id;
            NSString *roadmapID;
            NSString *routeID;
            NSString *title;
            NSString *description;
            NSNumber *collapsed;
            NSNumber *isRoute;
            NSString *staff;
            NSNumber *start;
            NSArray *staffList;
            NSMutableArray *updates;
            NSMutableArray *uploads;
            NSMutableArray *subRoads;
        }

        @property (nonatomic, copy) NSString *_id;
        @property (nonatomic, copy) NSString *roadmapID;
        @property (nonatomic, copy) NSString *routeID;
        @property (nonatomic, copy) NSString *title;
        @property (nonatomic, copy) NSString *description;
        @property (nonatomic, copy) NSNumber *collapsed;
        @property (nonatomic, copy) NSNumber *isRoute;
        @property (nonatomic, copy) NSString *staff;
        @property (nonatomic, copy) NSNumber *start;
        @property (nonatomic, copy) NSArray *staffList;
        @property (nonatomic, copy) NSMutableArray *updates;
        @property (nonatomic, copy) NSMutableArray *uploads;
        @property (nonatomic, copy) NSMutableArray *subRoads;

        - (id)initWithJSONObject:(NSDictionary *)JSONObject;

        @end

    This part is fine. To give you some background, I'm translating a bunch of JSON into a proper model object so it's easier to work with. Now I'm trying to display this in an NSOutlineView, and this is where the problem is. In particular, I have created the table and a data source:

        - (id)initWithRoads:(NSArray *)roads
        {
            if (self = [super init])
                root = [[NSMutableArray alloc] initWithArray:roads];
            return self;
        }

        - (NSInteger)outlineView:(NSOutlineView *)outlineView numberOfChildrenOfItem:(id)item
        {
            if (item == nil)
                return root.count;
            return 0;
        }

        - (BOOL)outlineView:(NSOutlineView *)outlineView isItemExpandable:(id)item
        {
            return NO;
        }

        - (id)outlineView:(NSOutlineView *)outlineView child:(NSInteger)index ofItem:(id)item
        {
            if (item == nil)
                item = root;
            if (item == root)
                return [root objectAtIndex:index];
            return nil;
        }

        - (id)outlineView:(NSOutlineView *)outlineView objectValueForTableColumn:(NSTableColumn *)tableColumn byItem:(id)item
        {
            return [item title];
        }

    In the final data source method, it attempts to return the "title" string property of the model object, but for some reason it crashes each time. I have checked that the method is taking in the correct object (I checked [[item class] description], and it is the right object), but for some reason, if I call any of the object's accessors, the app immediately crashes. This is totally puzzling because in the init method I can iterate through root (an array of RoadModel objects) and print any of its properties without issue. It is only when I'm trying to access the properties in any of the data source methods that this occurs. I wonder if there is something memory-wise going on behind the scenes that I am not providing for. If you can shed some light onto this situation, it would be greatly appreciated! Thanks in advance!

    Read the article

  • How to use JAXB to process messages from two separate schemas (with same root element name)

    - by sairn
    Hi, we have a client that is sending XML messages produced from two separate schemas. We would like to process them in a single application, since the content is related. We cannot modify the client's schemas that they are using to produce the XML messages... We can modify our own copies of the two schemas (or binding.jxb), if it helps, in order to enable our JAXB processing of messages created from the two separate schemas. Unfortunately, both schemas have the same root element name (see below). QUESTION: Does JAXB absolutely prohibit processing two schemas that have the same root element name? If so, I will stop my "easter egg" hunt for a solution to this... Or is there some workaround that would enable us to use JAXB for processing these XML messages produced from two different schemas?

    schema1 (note the root element name: "A"):

        <?xml version="1.0" encoding="UTF-8"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
          <xsd:element name="A">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="AA">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="AAA1" type="xsd:string" />
                      <xsd:element name="AAA2" type="xsd:string" />
                      <xsd:element name="AAA3" type="xsd:string" />
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
                <xsd:element name="BB">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="BBB1" type="xsd:string" />
                      <xsd:element name="BBB2" type="xsd:string" />
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    schema2 (note, again, the same root element name: "A"):

        <?xml version="1.0" encoding="UTF-8"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
          <xsd:element name="A">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="CCC">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="DDD1" type="xsd:string" />
                      <xsd:element name="DDD2" type="xsd:string" />
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
                <xsd:element name="EEE">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="EEE1">
                        <xsd:complexType>
                          <xsd:sequence>
                            <xsd:element name="FFF1" type="xsd:string" />
                            <xsd:element name="FFF2" type="xsd:string" />
                          </xsd:sequence>
                        </xsd:complexType>
                      </xsd:element>
                      <xsd:element name="EEE2" type="xsd:string" />
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    Read the article

  • Android inserting into SQLite JSON object

    - by Nizmon
    I'm trying to insert the JSON response below from a server into my SQLite DB and then read from the DB. The problem I'm getting is that when I run my code it compiles fine and runs with no errors, but when trying to read from the DB I just get back what seems like an empty string, so I know that the table is being created. I have the correct permissions within the manifest, and I have also referenced all my classes in there.

        {"locations": [{"locations":"Newcastle","location_id":"1"},{"locations":"London","location_id":"2"},{"locations":"Sunderland","location_id":"3"}]}

    Where I use Log.v("one", one); Log.v("two", two); below, it only prints out the first set within the JSON object, so Newcastle and 1. I don't get any errors at all, which is stumping me. When I call the method getName within the Location class I just seem to get a blank string back!

        // This class creates the table as well as inserts and returns data from the sqlite DB
        public class Location {

            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;
            private final Context mCtx;

            private static final String vd_location = ("CREATE TABLE " + TABLE_VD_LOCATION + " ("
                    + LOCATION + " TEXT," + LOCATION_ID + " TEXT " + ");");

            private static class DatabaseHelper extends SQLiteOpenHelper {

                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(vd_location);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                }
            }

            public Location(Context ctx) {
                this.mCtx = ctx;
            }

            public Location open() throws SQLException {
                this.mDbHelper = new DatabaseHelper(this.mCtx);
                this.mDb = this.mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                this.mDbHelper.close();
            }

            public void addOffer(JSONObject json) {
                try {
                    JSONArray earthquakes = json.getJSONArray("offers");
                    for (int i = 0; i < earthquakes.length(); i++) {
                        open();
                        JSONObject e = earthquakes.getJSONObject(i);
                        String one = e.getString("locations");
                        String two = e.getString("location_id");
                        Log.v("one", one);
                        Log.v("two", two);
                        mDb.execSQL("INSERT INTO " + TABLE_VD_LOCATION + " ( locations, location_id ) "
                                + "VALUES (?, ?)",
                                new Object[] { e.getString("locations"), e.getString("location_id") });
                        close();
                    }
                } catch (Exception e) {
                } finally {
                    close();
                }
            }

            public String getName(long l) throws SQLException {
                String[] columns = new String[] { LOCATION, LOCATION_ID };
                Cursor c = mDb.query(TABLE_VD_LOCATION, columns, null, null, null, null, null);
                String result = "";
                int iRow = c.getColumnIndex(LOCATION);
                int iName = c.getColumnIndex(LOCATION_ID);
                for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()) {
                    result = result + c.getString(iRow) + " " + c.getString(iName) + " " + "\n";
                }
                return result;
            }
        }

        // This code reads from the DB, this is just some very hacked together code so excuse it,
        // it also works when used on other tables
        public void getData() {
            boolean didItWork = true;
            try {
                Location loc = new Location(this);
                loc.open();
                String data = loc.getName(0);
                loc.close();
                Dialog t = new Dialog(this);
                t.setTitle("get" + data);
                t.show();
            } catch (Exception e) {
                didItWork = false;
                String error = e.toString();
                Dialog d = new Dialog(this);
                d.setTitle("Dang it!");
                TextView tv = new TextView(this);
                tv.setText(error);
                d.setContentView(tv);
                d.show();
            } finally {
                if (didItWork) {
                    Dialog d = new Dialog(this);
                    d.setTitle("Heck Yea!");
                    TextView tv = new TextView(this);
                    tv.setText("Success");
                    d.setContentView(tv);
                    d.show();
                    //entry.close();
                }
            }
        }
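
    One thing worth double-checking, not from the original post: which keys the quoted response actually contains. A quick Python look at the structure of the JSON (the string is copied verbatim from the response shown above):

        # json_shape.py -- inspect the structure of the server response quoted above,
        # to confirm which keys the parsing code has to ask for.
        import json

        response = ('{"locations": [{"locations":"Newcastle","location_id":"1"},'
                    '{"locations":"London","location_id":"2"},'
                    '{"locations":"Sunderland","location_id":"3"}]}')

        data = json.loads(response)
        print(list(data.keys()))            # ['locations'] -- the array lives under this key
        for entry in data["locations"]:
            print(entry["locations"], entry["location_id"])

    If the code that builds the JSONArray asks for a key that is not present in the response, the surrounding try/catch with an empty catch block would swallow the resulting JSONException silently, which would match the "no errors, empty table" symptom.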

    Read the article

  • xts error - order.by requires an appropriate time-based object

    - by Samo
    I cannot figure out why an error appears on a simple creation of an xts object:

        xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE))
        Error in xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)) :
          order.by requires an appropriate time-based object

    This was working perfectly 14 days ago when I last used the same code (since then, the only difference is that TICK.NYSE grew in length as data was added). More details below:

        > Sys.getenv("TZ")
        [1] "EST"
        > tail(xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)))
        Error in xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)) :
          order.by requires an appropriate time-based object
        > class(index(TICK.NYSE["T09:30/T09:31"]))
        [1] "POSIXct"
        > tail(xts(rep(0, NROW(tail(TICK.NYSE))), order.by = index(tail(TICK.NYSE))))
        Error in xts(rep(0, NROW(tail(TICK.NYSE))), order.by = index(tail(TICK.NYSE))) :
          order.by requires an appropriate time-based object
        > tail(TICK.NYSE)
                            TICK-NYSE.Open TICK-NYSE.High TICK-NYSE.Low TICK-NYSE.Close
        2012-03-15 14:54:00           -278            -89          -299             -89
        2012-03-15 14:55:00            -89            427           -89             201
        2012-03-15 14:56:00            201            318            30             234
        2012-03-15 14:57:00            234            242          -222             -64
        2012-03-15 14:58:00            -64            346           -82             346
        2012-03-15 14:59:00            346            525            36             525
                            TICK-NYSE.Volume TICK-NYSE.WAP TICK-NYSE.hasGaps
        2012-03-15 14:54:00                0             0                 0
        2012-03-15 14:55:00                0             0                 0
        2012-03-15 14:56:00                0             0                 0
        2012-03-15 14:57:00                0             0                 0
        2012-03-15 14:58:00                0             0                 0
        2012-03-15 14:59:00                0             0                 0
                            TICK-NYSE.Count
        2012-03-15 14:54:00              31
        2012-03-15 14:55:00              31
        2012-03-15 14:56:00              31
        2012-03-15 14:57:00              31
        2012-03-15 14:58:00              29
        2012-03-15 14:59:00              30
        > str(TICK.NYSE)
        An ‘xts’ object from 2011-01-18 09:30:00 to 2012-03-15 14:59:00 containing:
          Data: num [1:114090, 1:8] -5 -144 24 -148 -184 -77 20 121 111 -60 ...
         - attr(*, "dimnames")=List of 2
          ..$ : NULL
          ..$ : chr [1:8] "TICK-NYSE.Open" "TICK-NYSE.High" "TICK-NYSE.Low" "TICK-NYSE.Close" ...
          Indexed by objects of class: [POSIXct,POSIXt] TZ: xts
          Attributes: List of 4
         $ from   : chr "20110119 23:59:59"
         $ to     : chr "20110124 23:59:59"
         $ src    : chr "IB"
         $ updated: POSIXct[1:1], format: "2012-01-19 02:34:52"
        > str(index(TICK.NYSE))
        Class 'POSIXct'  atomic [1:114090] 1.3e+09 1.3e+09 1.3e+09 1.3e+09 1.3e+09 ...
          ..- attr(*, "tzone")= chr ""
          ..- attr(*, "tclass")= chr [1:2] "POSIXct" "POSIXt"
        > Sys.getenv("TZ")
        [1] "EST"
        > tail(index(TICK.NYSE))
        [1] "2012-03-15 14:54:00 EST" "2012-03-15 14:55:00 EST"
        [3] "2012-03-15 14:56:00 EST" "2012-03-15 14:57:00 EST"
        [5] "2012-03-15 14:58:00 EST" "2012-03-15 14:59:00 EST"
        > head(index(TICK.NYSE))
        [1] "2011-01-18 09:30:00 EST" "2011-01-18 09:31:00 EST"
        [3] "2011-01-18 09:32:00 EST" "2011-01-18 09:33:00 EST"
        [5] "2011-01-18 09:34:00 EST" "2011-01-18 09:35:00 EST"
        > Sys.info()
          sysname                                      "Linux"
          release                            "3.0.0-16-generic"
          version  "#28-Ubuntu SMP Fri Jan 27 17:44:39 UTC 2012"
        > sessionInfo()
        R version 2.14.1 (2011-12-22)
        Platform: x86_64-pc-linux-gnu (64-bit)

        locale:
         [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
         [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
         [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
         [7] LC_PAPER=en_US.UTF-8       LC_NAME=en_US.UTF-8
         [9] LC_ADDRESS=en_US.UTF-8     LC_TELEPHONE=en_US.UTF-8
        [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=en_US.UTF-8

        attached base packages:
        [1] stats graphics grDevices utils datasets methods base

        other attached packages:
         [1] lattice_0.20-0               multicore_0.1-7
         [3] doSNOW_1.0.5                 snow_0.3-8
         [5] doRedis_1.0.4                rredis_1.6.3
         [7] foreach_1.3.2                codetools_0.2-8
         [9] iterators_1.0.5              PerformanceAnalytics_1.0.3.3
        [11] quantstrat_0.6.1             blotter_0.8.4
        [13] twsInstrument_1.3-3          FinancialInstrument_0.10.6
        [15] IBrokers_0.9-6               quantmod_0.3-18
        [17] TTR_0.21-0                   xts_0.8-4
        [19] Defaults_1.1-1               strucchange_1.4-6
        [21] sandwich_2.2-8               zoo_1.7-7
        [23] rj_1.0.2-5

        loaded via a namespace (and not attached):
        [1] grid_2.14.1  tools_2.14.1
        > dput(tail(TICK.NYSE))
        structure(c(-385, -213, -42, -334, -233, -111, -121, 20, -14,
        -125, -73, 265, -583, -269, -426, -520, -443, -440, -213, -42,
        -334, -233, -111, 119, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 31, 31, 30, 30, 31, 31), class = c("xts", "zoo"),
        .indexCLASS = c("POSIXct", "POSIXt"), .indexTZ = "",
        from = "20110119 23:59:59", to = "20110124 23:59:59", src = "IB",
        updated = structure(1326958492.96405, class = c("POSIXct", "POSIXt")),
        index = structure(c(1331927640, 1331927700, 1331927760, 1331927820,
        1331927880, 1331927940), tzone = "", tclass = c("POSIXct", "POSIXt")),
        .Dim = c(6L, 8L), .Dimnames = list(NULL, c("TICK-NYSE.Open",
        "TICK-NYSE.High", "TICK-NYSE.Low", "TICK-NYSE.Close", "TICK-NYSE.Volume",
        "TICK-NYSE.WAP", "TICK-NYSE.hasGaps", "TICK-NYSE.Count")))

    Read the article
