Search Results

Search found 11674 results on 467 pages for 'adding'.


  • Crash after plugging in the Logitech H600 wireless transmitter

    - by SneezyDinosaur
    I recently bought the Logitech H600 Wireless Headset. When I plug in the USB wireless transmitter, Ubuntu 12.04 LTS crashes. Why does this happen, and what do you suggest I do?

    Same problem here: as soon as the H600 headset is plugged in, 12.04 crashes. A relevant snippet from /var/log/Xorg.failsafe.log is below; the complete log can be found as a gist.

        [ 2676.496] (**) Logitech Logitech Wireless Headset: Applying InputClass "evdev keyboard catchall"
        [ 2676.496] (II) Using input driver 'evdev' for 'Logitech Logitech Wireless Headset'
        [ 2676.496] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
        [ 2676.496] (**) Logitech Logitech Wireless Headset: always reports core events
        [ 2676.496] (**) evdev: Logitech Logitech Wireless Headset: Device: "/dev/input/event16"
        [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Vendor 0x46d Product 0xa29
        [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found absolute axes
        [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found absolute multitouch axes
        [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found keys
        [ 2676.496] (II) evdev: Logitech Logitech Wireless Headset: Configuring as mouse
        [ 2676.496] (II) evdev: Logitech Logitech Wireless Headset: Configuring as keyboard
        [ 2676.496] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2.2/2-1.2.2:1.3/input/input16/event16"
        [ 2676.496] (II) XINPUT: Adding extended input device "Logitech Logitech Wireless Headset" (type: KEYBOARD, id 10)
        [ 2676.496] (**) Option "xkb_rules" "evdev"
        [ 2676.496] (**) Option "xkb_model" "pc105"
        [ 2676.496] (**) Option "xkb_layout" "us"
        [ 2676.497] (II) evdev: Logitech Logitech Wireless Headset: initialized for absolute axes.
        [ 2676.497] Backtrace:
        [ 2676.497] 0: /usr/bin/X (xorg_backtrace+0x37) [0xb7698637]
        [ 2676.497] 1: /usr/bin/X (0xb7510000+0x18c3ba) [0xb769c3ba]
        [ 2676.497] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb74ed40c]
        [ 2676.497] 3: /lib/i386-linux-gnu/libc.so.6 (0xb7155000+0x135d32) [0xb728ad32]
        [ 2676.497] 4: /usr/bin/X (XIChangeDeviceProperty+0x16c) [0xb7630b0c]
        [ 2676.497] 5: /usr/lib/xorg/modules/input/evdev_drv.so (0xb3f4a000+0x634e) [0xb3f5034e]

    Read the article

  • Unknown CSS font-family oddity with IE7-10 on Win Vista-8

    - by Jeff
    I am seeing the following "oddity" with IE7-10 on Win Vista-8: when declaring font-family: serif; I see an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected font, Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, in IE7-10 on Win Vista-8 (just like Courier on every version of Win). Screenshot: I would like to know (1) can anyone else confirm my research, and (2) BONUS: which font is IE displaying?
    Notes:
    - IE6 and IE7 on Win XP display Times New Roman, as they should.
    - It doesn't matter whether font-family: serif; is declared in an external stylesheet or inline on the element.
    - Quoting the CSS value makes no difference.
    - Adding "Unkown Font" to the stack also makes no difference.
    New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.

    Read the article

  • Need to sanity-check my .htaccess, especially Limit GET POST line for Google repellent

    - by jose
    I need a sanity check on this .htaccess file (from a WordPress site) that I inherited with a 5+ month old site. What's the symptom? Google and Bing crawl, but don't index any of the pages. Let me be clear: I'm not mad about "not ranking high." I think something is (accidentally) rejecting search engine indexing. I am not an expert on .htaccess, but one part especially looked funny: the Limit GET POST line. Is it not weird to have both Allow and Deny all, with no parameters? Also, I've ruled out robots.txt, but if I were you I'd want to see it, so here it is:

        User-agent: *
        Crawl-delay: 30

    And here's the more suspect .htaccess:

        # temp redirect wordpress content feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymousblog [R=302,NC,L]
        </IfModule>

        # temp redirect wordpress comment feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^comments/feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymous_comments [R=302,NC,L]
        </IfModule>

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

        <Limit GET POST>
        order deny,allow
        deny from all
        allow from all
        </Limit>

        <Limit PUT DELETE>
        order deny,allow
        deny from all
        </Limit>

        php_value memory_limit 32M

    Adding the page header by request:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta name="robots" content="noindex,nofollow" />
        <meta name="description" content="buncha junk i've deleted." />
        <meta name="keywords" content="keywords i've deleted" />
        <meta name="viewport" content="width=device-width" />

    Read the article

  • How do I set up sendmail, postfix, or dovecot so that Perl can send emails?

    - by M. Tibbits
    Direct question: I want to enable Perl to send emails. What package(s) should I install to set up a simplistic email server: no need for incoming, only outgoing. I can forward through Gmail's SMTP if that's best.

    Background: I am a programmer with a nightly build script written in Perl. I would like to email myself the results of my nightly builds (especially if there's an error). I've read about the Perl package Mail::Sendmail briefly, but if something else is more appropriate, please tell me! I tried a simple apt-get install sendmail, but that doesn't seem to work. I get the following errors:

        Server said: 421 4.3.0 collect: Cannot write ./dfp1PFXl7W020719 (bfcommit, uid=0, gid=120): No such file or directory
        message transmission error (421 4.3.0 collect: Cannot write ./dfp1PFXl7W020719 (bfcommit, uid=0, gid=120): No such file or directory )
        Server said: 421 4.3.0 collect: Cannot write ./dfp1PFXl7W020719 (bfcommit, uid=0, gid=120): No such file or directory

    I've googled this problem a bit and tried a few things -- adding my username to /etc/mail/trusted-users and such, but to no avail. In other words, I would be most grateful if you could provide simple instructions for setting up an outgoing mail server. I really don't understand the specifics, but as I understand it, I need to forward the mail through an existing SMTP server -- so I can use my Gmail account if need be (that's where I want to send the logs anyway). Any suggestions would be most greatly appreciated.

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that already contains data – prior to SSDT you would need to define a DEFAULT constraint, yet it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file:

    The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, Smart Defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • Enum driving a Visual State change via the ViewModel

    - by Chris Skardon
    Exciting title, eh? So, here’s the problem: I want to use my ViewModel to drive my Visual State. I’ve used the ‘DataStateBehavior’ before, but the trouble with it is that it only works for bool values, and the minute you jump to more than 2 Visual States, you’re kind of screwed. A quick search has shown up a couple of points of interest: first, the DataStateSwitchBehavior, which is part of the Expression Samples (on Codeplex), and also available via Pete Blois’ blog. The second is to use a DataTrigger with GoToStateAction (from the Silverlight forums). So, onwards… first let’s create a basic switch Visual State, so, a DataObj with one property: IsAce…

        public class DataObj : NotifyPropertyChanger
        {
            private bool _isAce;
            public bool IsAce
            {
                get { return _isAce; }
                set
                {
                    _isAce = value;
                    RaisePropertyChanged("IsAce");
                }
            }
        }

    The ‘NotifyPropertyChanger’ is literally a base class with RaisePropertyChanged, implementing INotifyPropertyChanged. OK, so we then create a ViewModel:

        public class MainPageViewModel : NotifyPropertyChanger
        {
            private DataObj _dataObj;

            public MainPageViewModel()
            {
                DataObj = new DataObj { IsAce = true };
                ChangeAcenessCommand = new RelayCommand(() => DataObj.IsAce = !DataObj.IsAce);
            }

            public ICommand ChangeAcenessCommand { get; private set; }

            public DataObj DataObj
            {
                get { return _dataObj; }
                set
                {
                    _dataObj = value;
                    RaisePropertyChanged("DataObj");
                }
            }
        }

    Aaaand finally – hook it all up to the XAML, which is a very simple UI: a Rectangle, a TextBlock and a Button. The Button is hooked up to ChangeAcenessCommand, the TextBlock is bound to the ‘DataObj.IsAce’ property and the Rectangle has 2 visual states: IsAce and NotAce. To make the Rectangle change its visual state I’ve used a DataStateBehavior inside the Layout Root Grid:

        <i:Interaction.Behaviors>
            <ei:DataStateBehavior Binding="{Binding DataObj.IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/>
        </i:Interaction.Behaviors>

    So now we have the button changing the ‘IsAce’ property and giving us the other visual state: Great! So – the next stage is to get that to work inside a DataTemplate… which (thankfully) is easy money. All we do is add a ListBox to the View and an ObservableCollection to the ViewModel. Well – OK, a little bit more than that. Once we’ve got the ListBox with its ItemsSource property set, it’s time to add the DataTemplate itself. Again, this isn’t exactly taxing, and is purely going to be a Grid with a TextBlock and a Rectangle (again, I’m nothing if not consistent). Though, to be a little jazzy I’ve swapped the rectangle to the other side (living the dream). So, all that’s left is to add some States to the template… (yes – you can do that). These can be the same names as the others, or indeed, something else; I have chosen to stick with the same names and take the extra confusion hit right on the nose. Once again, I add the DataStateBehavior to the root Grid element:

        <i:Interaction.Behaviors>
            <ei:DataStateBehavior Binding="{Binding IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/>
        </i:Interaction.Behaviors>

    The key difference here is the ‘Binding’ attribute, where I’m now binding to the IsAce property directly, and boom! It’s all gravy! So far, so good. We can use boolean values to change the visual states, and (crucially) it works in a DataTemplate, bingo! Now. Onwards to the Enum part of this (finally!). Obviously we can’t use the DataStateBehavior; it only gives us true/false options. So, let’s give the GoToStateAction a go.
    Now, I warn you, things get a bit complex from here: instead of a bool with 2 values, I’m gonna max it out and bring in an Enum with 3 (count ‘em) 3 values: Red, Amber and Green (those of you with exceptionally sharp minds will be reminded of traffic lights). We’re gonna have a rectangle which also has 3 visual states – cunningly called ‘Red’, ‘Amber’ and ‘Green’. A new class called DataObj2:

        public class DataObj2 : NotifyPropertyChanger
        {
            private Status _statusValue;

            public DataObj2(Status status)
            {
                StatusValue = status;
            }

            public Status StatusValue
            {
                get { return _statusValue; }
                set
                {
                    _statusValue = value;
                    RaisePropertyChanged("StatusValue");
                }
            }
        }

    Where ‘Status’ is my enum. Good times are here! OK, so let’s get to the beefy stuff. We’ll start off in the same manner as last time: we will have a single DataObj2 instance available to the Page and bind to that. Let’s add some Triggers (these are in the LayoutRoot again):

        <i:Interaction.Triggers>
            <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Amber">
                <ei:GoToStateAction StateName="Amber" UseTransitions="False" />
            </ei:DataTrigger>
            <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Green">
                <ei:GoToStateAction StateName="Green" UseTransitions="False" />
            </ei:DataTrigger>
            <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Red">
                <ei:GoToStateAction StateName="Red" UseTransitions="False" />
            </ei:DataTrigger>
        </i:Interaction.Triggers>

    So what we’re saying here is that when DataObject2.StatusValue is equal to ‘Red’ then we’ll go to the ‘Red’ state. Same deal for Green and Amber (but you knew that already). Hook it all up and start the project. Hmm. Just grey. Not what I wanted. OK, let’s add a ‘ChangeStatusCommand’, hook that up to a button and give it a whirl: right, so the DataTrigger isn’t picking up the data on load. On the plus side, changing the status is making the visual states change. So. We’ll cross the ‘Grey’ hurdle in a bit; what about doing the same in the DataTemplate? <Codey Codey/> Grey again, but if we press the button: (I should mention, pressing the button sets the StatusValue property on the DataObj2 being represented to the next colour). Right. Let’s look at this ‘Grey’ issue. First ‘fix’ (and I use the term ‘fix’ in a very loose way): the Dispatcher fix. This involves using the Dispatcher on the View to call something like ‘RefreshProperties’ on the ViewModel, which will in turn raise all the appropriate ‘PropertyChanged’ events on the data objects being represented. So, here goes, into turdcode-ville – population – me. First, add the ‘RefreshProperties’ method to DataObj2:

        internal void RefreshProperties()
        {
            RaisePropertyChanged("StatusValue");
        }

    (shudder) Now, add it to the hosting ViewModel:

        public void RefreshProperties()
        {
            DataObject2.RefreshProperties();
            if (DataObjects != null && DataObjects.Count > 0)
            {
                foreach (DataObj2 dataObject in DataObjects)
                    dataObject.RefreshProperties();
            }
        }

    (double shudder) And now for the cream on the cake, adding the following line to the code-behind of the View:

        Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).RefreshProperties());

    So, what does this *ahem* code give us? Awesome: it makes the single bound data object show the colour, but frankly ignores the DataTemplate items. This (by the way) is the same output you get from:

        Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).ChangeStatusCommand.Execute(null));

    So… where does that leave me?
    What about adding a button to the Page to refresh the properties – maybe it’s a timer thing? Yes, that works. Right, what about using the Loaded event then, eh?

        Loaded += (s, e) => ((MoreVisualStatesViewModel) DataContext).RefreshProperties();

    Ahhh. No. What about converting the DataTemplate into a UserControl? Anything is worth a shot… though – I still suspect I’m going to have to ‘RefreshProperties’ if I want the rectangles to update. Still. No. This DataTemplate DataTrigger binding is becoming a bit of a pain… I can’t add a ‘refresh’ button to the actual code base; it’s not exactly user friendly. I’m going to end this one now, and put some investigating into the use of the DataStateSwitchBehavior (all the ones I’ve found – well, all 2 of them – are working in SL3, but not 4…)

    Read the article

  • MySQL Policy-Based Auditing Webinar Recording Now Available

    - by Rob Young
    For those who missed the live event, the recording of the "How to Add Policy-Based Auditing to your MySQL Applications" webinar is now available. You can view it here. This presentation builds on my earlier blog post on MySQL Enterprise Audit that was announced at MySQL Connect in late September. The web presentation expands on the introductory blog and covers:
    - The regulatory problem to be solved (internal audit, PCI, Sarbanes-Oxley, HIPAA, others)
    - MySQL Audit solutions for both Community and Enterprise users:
      - General Log - use the basic features of the MySQL server
      - MySQL 5.5 open audit API - or use your time and talent to build your own solution
      - MySQL Enterprise Audit - or use the out of the box, ready for production solution from MySQL
    - Simple, step-by-step process for installing, enabling and configuring the MySQL Enterprise Audit plugin for use with existing apps
    - New variables and options for tuning the MySQL Enterprise Audit plugin for your specific use case
    - Best practices for securing and managing audit log files and archived images
    - Roadmap for adding an integrated solution around MySQL Enterprise Audit for MySQL only and Oracle/MySQL shops
    You can learn all the technical details on MySQL Enterprise Audit in the MySQL docs and learn all about MySQL Enterprise Edition and Auditing here. As always, thanks for your support of MySQL!

    Read the article

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on their DSL connections. I use Network Manager to connect to this using a bridged modem connected to eth0. Often I want to test networking things, so I set up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges. (I work for my ISP.) Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the internet. But I would like to still be able to run the PPPoE for my computer at the same time, which means that I need to get Network Manager to run PPPoE over the bridge (or eth0). The problem is that it considers eth0 (and the bridge) 'not managed' by Network Manager, so it refuses to use it. So, how can I have Network Manager dial PPPoE over a bridge?

    Read the article

  • Enjoy Seamless Reading at Twitter in Chrome

    - by Asian Angel
    Twitter can be a lot of fun, but having to constantly use the More button to view a large number of tweets is frustrating. All you need to be rid of that frustration is the More Tweets! extension for Google Chrome.

    Before: Here it is… the classic “More Button”. If you are only interested in viewing a few tweets on occasion then it is not a problem, but if you are looking at a large number of tweets on a daily basis it can be very frustrating. Notice the last tweet from TinyHacker shown here…

    After: After installing the extension, the only thing you will need to do is refresh your Twitter page if you had it open beforehand. Now there will be a seamless connection from page to page when you are reading through tweets. You can see the TinyHacker tweet from above followed oh so nicely by tweets from the second page… this is definitely an improvement. For those who may be curious, if you are quick enough with your mouse you can see what the “automated connection process” looks like.

    Conclusion: If you are tired of constantly clicking the “More Button” and just want to read tweets without interruption, then you will be very satisfied after adding this extension to your browser.

    Links: Download the More Tweets! extension (Google Chrome Extensions)

    Read the article

  • Game software design

    - by L. De Leo
    I have been working on a simple implementation of a card game in object-oriented Python/HTML/Javascript, building on top of Django. At this point the game is in its final stage of development but, having spotted a big issue with how I was keeping the application state (basically using a global variable), I have reached a point where I'm stuck. The thing is that, ignoring the design flaw, in a single-threaded environment such as under the Django development server the game works perfectly. While I tried to design classes cleanly and keep methods short, I now have in front of me an issue that has been keeping me busy for the last 2 days and that countless print statements and visual debugging haven't helped me spot. I think the reason has to do with side effects of functions, and to solve it I've been wondering if maybe refactoring the code entirely with static classes that keep no state and just passing the state around might be a good option to keep side effects under control. Or maybe trying to program it in a functional programming style (although I'm not sure Python allows for a purely functional style). I feel that there are already too many layers and that the software (which I plan to make incredibly more complex by adding non-trivial features) has already become unmanageable. How would you suggest I re-take control of my code base, which (despite still being at < 1000 LOC) seems to have taken a life of its own?
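
    As a rough sketch of the "keep no state, pass the state around" idea being considered here (the names GameState, draw and play, and the rules, are purely illustrative assumptions, not taken from the actual code base):

        from dataclasses import dataclass, replace
        from typing import Tuple

        @dataclass(frozen=True)          # immutable: every transition returns a new state
        class GameState:
            deck: Tuple[str, ...]
            hand: Tuple[str, ...]
            discard: Tuple[str, ...] = ()

        def draw(state: GameState) -> GameState:
            """Pure function: takes a state, returns a new one, touches no globals."""
            if not state.deck:
                return state
            card, *rest = state.deck
            return replace(state, deck=tuple(rest), hand=state.hand + (card,))

        def play(state: GameState, card: str) -> GameState:
            """Move a card from hand to discard, again without mutating anything."""
            if card not in state.hand:
                raise ValueError(card + " is not in hand")
            new_hand = tuple(c for c in state.hand if c != card)
            return replace(state, hand=new_hand, discard=state.discard + (card,))

        # Each Django view would load the state, apply pure transitions, and persist
        # the result, instead of reading and writing a module-level global.
        s0 = GameState(deck=("7H", "QS", "2D"), hand=())
        s1 = draw(s0)
        s2 = play(s1, "7H")

    Whether the transitions live in module-level functions or static methods matters less than the discipline that nothing outside the passed-in state is read or written.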

    Read the article

  • Facebook like button for a Facebook feed item

    - by jverdeyen
    Here is what I do: when the admin of a website I'm creating adds a new 'news' item to his website, it directly posts the same item on the Facebook 'fanpage' of the website. This is done by a Facebook app with the permissions to post an item on the Facebook fanpage. So far so good; I'm getting the new post id from the Facebook graph. Here is what I want to achieve: I'm adding a 'like' button on the original website, but when this is clicked it should be referenced to the same post item on the Facebook page. So if someone likes the news post on the website he should also like the referencing Facebook feed post, which is exactly the same. Can this be done? I've been playing around with the href etc. of the like button, without any success. When I hit the like button it likes the url given, but it doesn't recognize that this is a feed post. Any idea? Thx in advance.

    Read the article

  • How should I work out VAT (UK tax) in my eCommerce site?

    - by Leonard Challis
    We have an ecommerce system in place. The sales actually go through Sage, so we have an export script from our system that uses a third-party Sage Importer program. With a new version of this importer, values are checked more thoroughly, and we are getting 1 pence discrepancies because of the way rounding works - our system has always held prices and worked to 4 decimal places. In the checkout the totals would be worked out first, then the rounding to 2 decimal places. The importer does the rounding first, though. So, for instance:

    Our way:
        Product 1: £13.4561, Qty: 2
        Total inc VAT = £32.29 (to 2dp)
    Importer way:
        Product 1: £13.4561, Qty: 2
        Total inc VAT = £32.30 (to 2dp)

    Management are reluctant to lose the 4dp, but the developers of the Sage importer have said that this is correct and makes sense -- you wouldn't sell a product for £13.4561 in a shop, nor would you charge someone tax at 4 decimal places. I contacted HMRC and the operator didn't really give me much to go on, telling me a technician would phone back, which they haven't done; I'm still waiting after almost a week and numerous follow-up calls. I did find a PDF on the HMRC web site, but this did about as much to confuse me as it did to answer my questions. I see that they're happy for people to round up or down, as long as it is consistent, but I can't tell whether it should be done on a line-by-line basis or on the end total of the order. We are now in the position where we need to decide whether it's worth us doing one of the following, or something completely different. Please advise with any experience or information I can read.
    - Change all products on the site to use 2dp
    - Keep 4dp but round each line in the order to 2dp before working out tax
    - Keep it as it is and "fudge" the values in the export script (i.e. make the values correct by adding or subtracting 1p and changing the shipping cost so the totals still work out)
    Any thoughts?
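
    To make the two rounding orders concrete, here is a small illustrative sketch (not from the original question; it assumes the standard 20% UK VAT rate, which reproduces the £32.29 / £32.30 figures above):

        from decimal import Decimal, ROUND_HALF_UP

        TWO_DP = Decimal("0.01")
        VAT = Decimal("1.20")          # assumed 20% VAT rate

        price = Decimal("13.4561")     # unit price held to 4 decimal places
        qty = 2

        # "Our way": total first, round once at the end
        total_then_round = (price * qty * VAT).quantize(TWO_DP, rounding=ROUND_HALF_UP)

        # "Importer way": round the unit price to 2dp before totalling
        rounded_price = price.quantize(TWO_DP, rounding=ROUND_HALF_UP)
        round_then_total = (rounded_price * qty * VAT).quantize(TWO_DP, rounding=ROUND_HALF_UP)

        print(total_then_round)   # 32.29
        print(round_then_total)   # 32.30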

    Read the article

  • Bridged VM guest does not get IPv6 prefix

    - by Arne
    I have a similar problem and setup as described in "IPv6 does not work over bridge". My host gets an IPv6 prefix but the guest VM only gets a link-local fe80:: prefix. Using tcpdump I can see that solicit messages are going out from the guest but the host (ubuntu-server) doesn't seem to respond:

        arne@ubuntu-server:/var/log$ sudo tcpdump -i br0 host fe80::5054:00ff:fe4d:9ae0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes
        14:31:15.314419 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 4 group record(s), length 88
        14:31:15.322337 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
        14:31:15.502374 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
        14:31:15.743894 IP6 fe80::5054:ff:fe4d:9ae0.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
        14:31:15.802389 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 4 group record(s), length 88
        14:31:17.906580 IP6 fe80::5054:ff:fe4d:9ae0.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit

    I had a firewall issue which I fixed by adding the following (copied from similar IPv4 before.rules settings) to /etc/ufw/before6.rules, at the end before the commit statement:

        # allow bridging (for KVM)
        -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    I am running the host on an Ubuntu 14.04 server, so I guess I could have used dnsmasq, but I didn't find any howto for it, so I used radvd (which had to be installed) with the following configuration in /etc/radvd.conf:

        interface br0 {
            AdvSendAdvert on;
            AdvLinkMTU 1480;
            prefix 2a01:79d:xxx::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };

    This didn't help though, so I guess I must have configured it wrong? Any help appreciated.
    Br, Arne
    PS: I wish the Ubuntu documentation included how to configure virtualization to work with IPv6.

    Read the article

  • Help with complex MVVM (multiple views)

    - by jsjslim
    I need help creating view models for the following scenario:
    - Deep, hierarchical data
    - Multiple views for the same set of data
    - Each view is a single, dynamically-changing view, based on the active selection
    - Depending on the value of a property, display different types of tabs in a tab control

    My questions: Should I create a view-model representation for each view (VM1, VM2, etc)?
    1. Yes:
       a. Should I model the entire hierarchical relationship? (ie, SubVM1, HouseVM1, RoomVM1)
       b. How do I keep all hierarchies in sync? (e.g, adding/removing nodes)
    2. No:
       a. Do I use a huge, single view model that caters for all views?

    Here's an example of a single view.
    Figure 1: Multiple views updated based on active room. Notice Tab control.
    Figure 2: Different active room. Multiple views updated. Tab control items changed based on object's property.
    Figure 3: Different selection type. Entire view changes.

    Read the article

  • Bypass Facebook Social Reader Apps using Google Chrome Extension

    - by Gopinath
    One of the most annoying features of Facebook is its Social Reader apps, which automatically share whatever you read, watch or listen to online. I don't like to share whatever I do online to Facebook, as I want my privacy. A few of my friends, knowingly or unknowingly, are using Social Reader apps and their online activity is automatically posted to their walls. To read the articles or watch the videos shared by a Social Reader application, I need to add the application and allow it to automatically post. I don't like Social Reader apps, and if you are one like me, here is a Google Chrome browser plugin that allows us to bypass them. The extension Facebook Unsocial Reader smartly rewrites Facebook links in such a way that you will be able to access the content of links without adding Social Reader apps to your account. To rewrite the links, the extension cleverly uses Google's I'm Feeling Lucky service and searches for the article's title. The first Google search result is almost perfect in identifying the original article link. If you are a heavy Facebook user and concerned about using Social Reader apps, this plugin is a must-have. Photo (cc) Josh Hallett. Facebook Unsocial Reader Extension for Google Chrome

    Read the article

  • The lifecycle of "cool"

    - by Dori
    I've been thinking lately about how some programming projects/products become "cool," and in particular, how that trend can later reverse. Here are two examples that might better explain my context:

    Textmate
    Whenever someone asks about text editors on OS X, the answer on the SE sites is an automatic "Textmate!" But looked at objectively:
    - Textmate 1.0 shipped October 2004
    - Textmate 1.5 shipped January 2006
    - Textmate 2 was announced February 2006
    - As of September 2010, the currently shipping version is 1.5.9
    - In all of 2010, there have been a total of three posts on the Textmate blog
    At what point (if ever) do Textmate fans start thinking about switching to another text editor? When it breaks after some future Apple update? When alpha geeks they respect start recommending something else? Or?

    jQuery
    Whenever a JavaScript-related question is asked on the SE sites, the knee-jerk response is "jQuery!" I've seen it happen even when the question itself only required a single line of JavaScript. Or when the question could be better answered by using CSS. Do the answerers understand they're suggesting a blowtorch to light a candle? That they're recommending adding 70K or so of code to do something trivial? Or is it a symptom of "When you have a hammer, everything looks like a nail"—that is, jQuery is all they know how to do, so that's their recommendation? And do they understand that while they may know jQuery well, that doesn't necessarily mean that they know JavaScript? Is there a way to explain that learning JavaScript would make them better jQuery programmers?

    My bigger-picture questions:
    - Is this niche focus primarily a trait of programmers?
    - How do you get programmers to not immediately jump to recommending their personal favorites?
    - What can motivate programmers to review their initial selection criteria and possibly modify their choice?
    Your thoughts?

    Read the article

  • Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds

    - by JuergenKress
    Do you need to build a demo for a customer? Why not use the Fusion Order Demo (FOD) and modify it to do some extra things. "Great idea, let me install it on one of my Linux servers," I said. "Turns out there are a few gotchas, so here is how I installed it on a Linux server with JDeveloper on my Windows desktop."

    Task 1: Install Oracle JDeveloper Studio. I already had JDeveloper 11.1.1.6 with SOA extensions installed, so this was easy.

    Task 2: Install the Fusion Order Demo Application. First thing to do is to obtain the latest version of the demo from OTN; I obtained the R1 PS5 release. Gotcha #1 – my WinZip wouldn’t unzip the file; I had to use 7-Zip.

    Task 3: Install Oracle SOA Suite. On the domain, modify the setDomainEnv script by adding “-Djps.app.credential.overwrite.allowed=true” to JAVA_PROPERTIES and restarting the Admin Server. Also set the JAVA_HOME variable and add Ant to the path. I created a domain with separate SOA and BAM servers and also set up the Node Manager to make it easier to stop and start components.

    Read the full blog post by Antony. For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Partitioning During Installation, Reinstalling Ubuntu, Backing Up Downloaded Repositories

    - by user209645
    I am new to Ubuntu and I have just finished installing 13.10 amd64 on my PC. This will replace my Windows 7 as my only OS. I just want to clarify some issues that've been bugging me. I tried reading posts on the same topics but I just can't wrap my head around it yet. I partitioned my 80GB drive into:
    - /root: 30GB (sorry for the confusion, I actually meant /)
    - /home: 40GB
    - /var: 3GB
    - swap: 4GB (2GB of mem)
    Please correct me if I'm wrong about these:
    - All of the users' documents are saved in their respective folders in /home. But say I want to clean install (format) Ubuntu; I don't need to make backups of /home and /var as they are on separate partitions. But when re-installing, do I just choose / and format it, and it will recognize all the partitions (not making another /home and /var inside /)?
    - Downloaded packages (from all the repositories) and all their dependencies are saved in /var. So after re-installing on the same PC (assuming I'm offline), it will just use the latest updates in /var if I choose to update? And if all the installed apps and their dependents are all in there, all I need to do is re-install them without encountering errors?
    - I have also read that you can back them up using aptoncd and then add the DVD to the sources. So if I download all the high-ranking apps using Synaptic, could I then have an all-in-one DVD installer?
    - Is 30GB for / excessive, because the bulk of files will either be in /home (personal, downloads, music, videos) or /var (updates, packages, installed apps)?
    Please excuse me for asking such a question but I really want to explore and learn more.

    Read the article

  • Oracle Access Manager 11.1.2 Certified with E-Business Suite 12

    - by Elke Phelps (Oracle Development)
    I am happy to announce that Oracle Access Manager 11gR2 (11.1.2) is now certified with E-Business Suite Releases 12.0.6 and 12.1. If you are implementing single sign-on for the first time, or are an existing Oracle Access Manager user, you may integrate with Oracle Access Manager 11gR2 using Oracle Access Manager WebGate and Oracle E-Business Suite AccessGate.

    Supported Architecture and Release Versions:
    - Oracle Access Manager 11.1.2
    - Oracle E-Business Suite Release 12.0.6, 12.1.1+
    - Oracle Identity Management 11.1.1.5, 11.1.1.6
    - Oracle Internet Directory 11.1.1.6
    - Oracle WebLogic Server 10.3.0.5+

    What's New In This Oracle Access Manager 11gR2 Integration?
    - Simplified integration: We've simplified the instructions and cut the number of pages, while adding clarity to the steps.
    - Automation of configuration steps: We've automated some of the required configuration steps. This is the first phase of automation and diagnostics that are part of our roadmap for this integration.
    - Use of default OAM Login page: We are reducing the required troubleshooting by delivering the default OAM Login page for the integration. A custom login page can still be created by using Oracle Access Manager.
    - Use of the Detached Credential Collector in a Demilitarized Zone: We have certified the Detached Credential Collector as part of a DMZ configuration. This will enhance the security of the underlying Oracle Access Manager and E-Business Suite components, which will now be required only within a company's intranet.

    Choosing the Right Architecture: Our previously published blog article and support note with single sign-on recommended and certified integration paths has been updated to include Oracle Access Manager 11gR2: Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)

    Other References:
    - Integrate with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Note 1484024.1)
    - Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)

    Related Articles:
    - Understanding Options for Integrating Oracle Access Manager with E-Business Suite
    - Why Does E-Business Suite Integration with OAM Require Oracle Internet Directory?
    - In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12

    Read the article

  • jQuery + Perl CGI to vb.net transition

    - by user1257458
    I've been developing Oracle database-heavy "web applications" forever by building my HTML by hand, adding some jQuery to handle ajax requests (HTML inserts for forms processing, etc.), and I've always done my server-side stuff in Perl CGI. I really love how easy it is to read some form input, execute some select statements through DBI (SO EASY), and generate HTML to be inserted by the jQuery request. That's a web application to me. However, my new boss builds everything in Visual Studio 2010, VB.net, usually WebForms. So, for work reasons, I now need to start developing in VB.net so it can be collectively maintained, and I'm just seeking advice on where to start learning and how to approach this. I know I could at least learn ASP.net and VB.net, and create a webform, have it read parameters, return HTML, etc., which would allow me to use my previously written HTML and client-side scripts (jQuery). Although - since we're moving heavily to mobile applications I really need to reduce the client-side processing load. Is there any advantage to my boss' method? Thanks a ton.

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in a SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list as well as leveraging "lower cost labor arbitrage" (read "3 Billion Capitalists"). Supply Chain Quarterly reports that India and Brazil received the highest rankings of the logistics markets in developing nations. India tops the list in a new Emerging Market Logistics Index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the index rated 38 developing countries on 3 factors:
    1. "Market size and growth attractiveness" considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility" examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness" rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.
    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers.
    The top 10 emerging countries:
    1. India
    2. Brazil
    3. Indonesia
    4. Mexico
    5. Russia
    6. Turkey
    7. United Arab Emirates
    8. Egypt
    9. Saudi Arabia
    10. Malaysia
    Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010

    Read the article

  • Bridging the gap between developers and testers with VS 2010

    - by Etienne Tremblay
    Hey everyone, I know it’s been an eternity since I blogged, but I have so much to do that I unfortunately need to prioritize. Vincent Grondin and I did a 7-hour presentation on the new developer and tester tools available in the VS 2010 suite. It was a blast. We did it in front of an audience (around 120) and it was taped. We did it as a play and really didn’t look at the crowd at all; we were training each other on the technology. It is now available for anyone that would like to watch it at this location: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx

    What we covered in the full day event:

    Migration to TFS 2010 (10h00)
    1. Migration of VSS to TFS (20 min.)
    2. Automating the Build (something you can't do with VSS) (20 min.)
    3. User story (real application context for this presentation) (20 min.)
    10h00 Pause

    Manual Tests by Dev (11h30)
    4. Adding a tester to the team (intro to MTM) (20 min.)
    5. Define tests (what is a white bug) (20 min.)
    6. Fix the bug, show IntelliTrace and play back the test (20 min.)
    12h15 Lunch

    Manual testing for maintenance (13h30)
    7. Implement a new feature (web service), identify the bug with MTM, branch for a production fix and also add a new build script (20 min.)
    8. Fix the bug in the production branch, play back the tests, merge the change into the main branch (20 min.)

    Manual testing with the Lab Manager (14h30)
    9. Intro to Lab Manager and environments (20 min.)
    10. Change the build script to deploy to the lab and test with the web service in the lab environment (20 min.)
    15h15 Pause

    Automate UI tests with Coded UI (15h30)
    11. Reducing the effort of testing the UI (20 min.)
    12. Repeating testing to make sure the application is working properly (20 min.)
    13. Automate Coded UI with the Lab environment (20 min.)
    16h30 Conclusions

    As you can see, lots of stuff!! Enjoy the show and let us know how you like it. Cheers, ET

    Read the article

  • Cedarview drivers boot to a blank screen

    - by map7
    I'm following the tutorial below to get my Intel D2700DC motherboard's graphics working: http://daily.siebler.eu/2012/06/ubuntu-12-04-driver-for-intel-cedarview-atom-n2000-und-d2000-serie/

    When I boot I get a blank screen. I followed the tutorial and read all the comments. I've also tried:
    - Installing gdm and using it instead of lightdm (the Ubuntu default): sudo apt-get install gdm
    - Removing the previous pae kernel: http://www.liberiangeek.net/2011/11/remove-old-kernels-in-ubuntu-11-10-oneiric-ocelot/
    - Rebooting before adding the cedarview packages
    - Both with and without "video=LVDS-1:d" in the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub

    I still get a blank screen. I am plugged into an HD screen through HDMI and have tried the DVI connector also. I can see the grub menu, then a little of the loading, and then 'No signal'. I can still ssh into the box though, so it is logging in.

        lspci -v -s 00:02.0
        00:02.0 VGA compatible controller: Intel Corporation Atom Processor D2xxx/N2xxx Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        Subsystem: Intel Corporation Device 2011
        Flags: bus master, fast devsel, latency 0, IRQ 45
        Memory at 80100000 (32-bit, non-prefetchable) [size=1M]
        I/O ports at 20d0 [size=8]
        Expansion ROM at <unassigned> [disabled]
        Capabilities: <access denied>
        Kernel driver in use: pvrsrvkm
        Kernel modules: cedarview_gfx

        uname -a
        Linux test-desktop 3.2.0-35-generic #55-Ubuntu SMP Wed Dec 5 17:45:18 UTC 2012 i686 i686 i386 GNU/Linux

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this.

    Result:

        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        /// <summary>
        ///
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method.
        }

    Read the article

  • ParticleSystem in Slick2d (with MarteEngine)

    - by Bro Kevin D.
    First of all, sorry if this sounds very newbie-ish. I'm stuck at making a ParticleSystem I made using Pedigree work in my game. It's basically an explosion that I want to display whenever an enemy dies. The ParticleSystem has two emitters, smoke and explosion. I tried putting it in my Enemy class (which extends Entity):

        @Override
        public void update(GameContainer gc, int delta) throws SlickException {
            super.update(gc, delta);
            /** bunch of codes */
            explosionSystem.update(delta);
        }

        @Override
        public void render(GameContainer gc, Graphics gfx) throws SlickException {
            super.render(gc, gfx);
            if (isDestroyed) {
                explosionSystem.render(x, y);
                if (explosionSystem.getEmitter(1).completed()) {
                    this.destroy();
                }
            }
        }

    And it does not render. I'm not sure if this is the proper way of implementing it, as I've considered creating an Entity to serve as a controller for all the enemies. Right now, I'm just adding enemies every second. So how do I render the ParticleSystem when the enemy dies? If anyone can point me in the right direction, thank you for your time.

    Read the article
