Search Results

Search found 21140 results on 846 pages for 'control theory'.


  • Scheduler Controls for ASP.NET and WinForms - v2010 vol 1

    Check out the features that the ASPxScheduler and XtraScheduler suites will be getting for DXperience v2010 vol 1: Automatic Time Cell Sizing in Scheduler Reports. Time cells can now automatically adjust their size depending on content. You can control whether cells should shrink so that no empty space is left, and whether they can be automatically enlarged to fit all available appointments. The following image demonstrates how a calendar changes when you activate Auto Time Cell Sizing. Smart...

    Read the article

  • Exploring TCP throughput with DTrace

    - by user12820842
    One key measure to use when assessing TCP throughput is the amount of unacknowledged data in the pipe. This is sometimes termed the Bandwidth Delay Product (BDP) (note that BDP is often used more generally as the product of the link capacity and the end-to-end delay). In DTrace terms, the amount of unacknowledged data in bytes for the connection is the difference between the next sequence number to send and the lowest unacknowledged sequence number (tcps_snxt - tcps_suna). According to the theory, when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput. In other words, if we can fill the pipe without the peer TCP complaining (by virtue of its window size reaching 0), we are purely bandwidth-limited. If the peer's receive window is too small, however, the sending TCP has to wait for acknowledgements before it can send more data. In this case the round-trip time (RTT) limits throughput. In such cases the effective throughput limit is the window size divided by the RTT; e.g. if the window size is 64K and the RTT is 0.5sec, the throughput is 128K/s.

    So a neat way to visually determine whether the receive window of clients may be too small is to compare the distribution of BDP values for the server against the client's advertised receive window. If the BDP distribution overlaps the send window distribution such that it is to the right (or lower down in DTrace, since quantizations are displayed vertically), it indicates that the amount of unacknowledged data regularly exceeds the client's receive window, so it is possible that the sender has more data to send but is blocked by a zero-window on the client side. In the following example, we compare the distribution of BDP values to the receive window advertised by the receiver (10.175.96.92) for a large file download via http.

    # dtrace -s tcp_tput.d
    ^C

      BDP(bytes)                                10.175.96.92          80
             value  ------------- Distribution ------------- count
                -1 |                                         0
                 0 |                                         6
                 1 |                                         0
                 2 |                                         0
                 4 |                                         0
                 8 |                                         0
                16 |                                         0
                32 |                                         0
                64 |                                         0
               128 |                                         0
               256 |                                         3
               512 |                                         0
              1024 |                                         0
              2048 |                                         9
              4096 |                                         14
              8192 |                                         27
             16384 |                                         67
             32768 |@@                                       1464
             65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   32396
            131072 |                                         0

      SWND(bytes)                               10.175.96.92          80
             value  ------------- Distribution ------------- count
             16384 |                                         0
             32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17067
             65536 |                                         0

    Here we have a puzzle. We can see that the receiver's advertised window is in the 32768-65535 range, while the amount of unacknowledged data in the pipe is largely in the 65536-131071 range. What's going on here? Surely in a case like this we should see zero-window events, since the amount of data in the pipe regularly exceeds the window size of the receiver. Yet no zero-window events occur - the SWND distribution displays no 0 values and stays within the 32768-65535 range. The explanation is straightforward enough: TCP window scaling is in operation for this connection. The Window Scale TCP option is used on connection setup to allow a connection to advertise (and have advertised to it) a window greater than 65536 bytes. In this case the scaling shift is 1, which explains why the SWND values are clustered in the 32768-65535 range rather than the 65536-131071 range - the raw SWND value needs to be multiplied by two, since the receiver is scaling its window by a shift factor of 1. Here's the simple script that compares BDP and SWND distributions, fixed to take account of window scaling.
    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::send
    / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
    {
            @bdp["BDP(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
                quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
    }

    tcp:::receive
    / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
    {
            @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
                quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
    }

    And here's the fixed output.

    # dtrace -s tcp_tput_scaled.d
    ^C

      BDP(bytes)                                10.175.96.92          80
             value  ------------- Distribution ------------- count
                -1 |                                         0
                 0 |                                         39
                 1 |                                         0
                 2 |                                         0
                 4 |                                         0
                 8 |                                         0
                16 |                                         0
                32 |                                         0
                64 |                                         0
               128 |                                         0
               256 |                                         3
               512 |                                         0
              1024 |                                         0
              2048 |                                         4
              4096 |                                         9
              8192 |                                         22
             16384 |                                         37
             32768 |@                                        99
             65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   3858
            131072 |                                         0

      SWND(bytes)                               10.175.96.92          80
             value  ------------- Distribution ------------- count
               512 |                                         0
              1024 |                                         1
              2048 |                                         0
              4096 |                                         2
              8192 |                                         4
             16384 |                                         7
             32768 |                                         14
             65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  1956
            131072 |                                         0

    Read the article

  • Volume indicator issue after xubuntu 13.10 upgrade

    - by misterjinx
    Last night I decided to upgrade my system to the latest version of Xubuntu, 13.10. The process went fine, but now I'm facing a strange issue. There are no sound settings available in the settings manager window, the volume indicator looks as if the volume is muted, and clicking the indicator is broken as well. I tried an alsa force-reload followed by a restart of the computer, but that didn't help. Any thoughts? Later edit: after some digging I found out that the volume control exists, so this must be a volume indicator issue.

    Read the article

  • css equivalent of table-row [closed]

    - by SpashHit
    I am trying to shift my style away from using tables to control formatting, but I haven't seen a simple CSS solution that does exactly the same thing as <table><tr><td>arbitrary-html-A</td><td>arbitrary-html-B</td></tr></table>. All I want is to make sure arbitrary-html-A and arbitrary-html-B are aligned horizontally. I have tried various CSS concoctions using display: inline, clear: none, and float: left, but they all have unwanted side-effects of moving my content around, while the table-tr solution just does what I want, regardless of what's in the arbitrary HTML, and regardless of what is in the HTML that contains my table. Am I missing something?
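    For reference, here is the kind of markup I have been experimenting with - a minimal sketch using the CSS table display values, which are meant to be the direct equivalents of table/tr/td (class names are illustrative):

        <style>
          .tbl  { display: table; }
          .row  { display: table-row; }
          .cell { display: table-cell; } /* cells in the same .row line up horizontally */
        </style>
        <div class="tbl">
          <div class="row">
            <div class="cell">arbitrary-html-A</div>
            <div class="cell">arbitrary-html-B</div>
          </div>
        </div>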

    Read the article

  • SQL Server 2008 Service Pack 1 and the Invoke or BeginInvoke cannot be called error message

    - by Jeff Widmer
    When trying to install SQL Server 2008 Service Pack 1 on a SQL Server 2008 instance that is running on a virtual machine, the installer starts, but after about 20 seconds I receive the following error message:

        TITLE: SQL Server Setup failure.
        ------------------------------
        SQL Server Setup has encountered the following error:
        Invoke or BeginInvoke cannot be called on a control until the window handle has been created.
        ------------------------------
        BUTTONS: OK
        ------------------------------

    Searching for this issue I found that several people have the same problem and there is no clear solution. Some had success with closing windows or Internet Explorer, but that didn't work for me; what did work was to make sure the SQL Server 2008 "Please wait while SQL Server 2008 Setup processes the current operation." dialog is selected and has the focus when it first shows up. Add a comment if you find out any information about how to consistently get around this issue or why it is happening in the first place.

    Read the article

  • DIY Sunrise Simulator Combines Microchips, LEDs, and Laser Cut Goodness

    - by Jason Fitzpatrick
    Sunrise simulators use a gradually brightening light to wake you in the morning. Check out this creative build that combines a microprocessor, addressable LEDs, and a nifty laser-cut bracket to yield a polished, wall-mountable alarm clock lamp. Courtesy of NYC-based tinkerer Holly, the project features a detailed build guide that references all the other projects that inspired her sunrise simulator. Hit up the link below to check out everything from her laser-cut shade brackets to the Adafruit module she used to control the light timing. Sunrise Lamp Alarm Clock [via Make]

    Read the article

  • Unable to keep the connection up when using a wireless bridge

    - by dan
    I am running Ubuntu 12.04 on a Dell Inspiron desktop (Core 2 Duo) and am using wicd to manage my network/wifi. I've found that the WiFi card in the machine has trouble staying connected to my router (I believe this is a function of the distance between the two), so I've taken an old Belkin F5D7231 wireless router and installed dd-wrt on it to use as a wireless bridge, hoping that it will have better reception. I think everything up through the wireless bridge is working OK, since I have no problems accessing the internet through it with my MacBook. The problem arises when I try to hook the Ubuntu machine up to the wireless bridge. It will connect for a few minutes, but it quickly disconnects without a clear triggering event; it may be more likely to disconnect under a heavy traffic load (which could be something as simple as "cat big_text_file" in an ssh session). I've tried switching from dhclient to dhcpcd without much improvement. Here is the output from the syslog when it connects:

    Jun 30 17:10:08 Chicabuntu dhcpcd[28278]: wlan1: dhcpcd not running
    Jun 30 17:10:08 Chicabuntu dhcpcd[28278]: wlan1: exiting
    Jun 30 17:10:08 Chicabuntu dhcpcd[28312]: eth0: dhcpcd not running
    Jun 30 17:10:08 Chicabuntu dhcpcd[28312]: eth0: exiting
    Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv6 no longer relevant for mDNS.
    Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83.
    Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Withdrawing address record for fe80::21c:c4ff:fe31:1a83 on eth0.
    Jun 30 17:10:08 Chicabuntu kernel: [15184.976127] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X
    Jun 30 17:10:08 Chicabuntu kernel: [15185.010805] ADDRCONF(NETDEV_UP): eth0: link is not ready
    Jun 30 17:10:08 Chicabuntu dhcpcd[28347]: eth0: dhcpcd not running
    Jun 30 17:10:08 Chicabuntu dhcpcd[28347]: eth0: exiting
    Jun 30 17:10:08 Chicabuntu kernel: [15185.180156] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X
    Jun 30 17:10:08 Chicabuntu kernel: [15185.212785] ADDRCONF(NETDEV_UP): eth0: link is not ready
    Jun 30 17:10:10 Chicabuntu kernel: [15187.027445] tg3 0000:3f:00.0: eth0: Link is up at 100 Mbps, full duplex
    Jun 30 17:10:10 Chicabuntu kernel: [15187.027452] tg3 0000:3f:00.0: eth0: Flow control is on for TX and on for RX
    Jun 30 17:10:10 Chicabuntu kernel: [15187.028300] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: dhcpcd 3.2.3 starting
    Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: hardware address = 00:1c:c4:31:1a:83
    Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: DUID = 00:01:00:01:17:81:85:79:00:1c:c4:31:1a:83
    Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: broadcasting for a lease
    Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83.
    Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv6 for mDNS.
    Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: Registering new address record for fe80::21c:c4ff:fe31:1a83 on eth0.*.
    Jun 30 17:10:20 Chicabuntu kernel: [15197.568016] eth0: no IPv6 routers present
    Jun 30 17:10:29 Chicabuntu dhcpcd[28353]: eth0: offered 192.168.1.111 from 192.168.1.254
    Jun 30 17:10:29 Chicabuntu dhcpcd[28353]: eth0: checking 192.168.1.111 is available on attached networks
    Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: leased 192.168.1.111 for 86400 seconds
    Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: adding IP address 192.168.1.111/24
    Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.1.111.
    Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: adding default route via 192.168.1.254 metric 0
    Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: exiting
    Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv4 for mDNS.
    Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: Registering new address record for 192.168.1.111 on eth0.IPv4.
    Jun 30 17:10:30 Chicabuntu dhcpcd.sh: interface eth0 has been configured with new IP=192.168.1.111
    Jun 30 17:10:39 Chicabuntu ntpdate[28439]: adjust time server 91.189.94.4 offset 0.001915 sec

    And here is the syslog from when it shuts down the connection without reason:

    Jun 30 17:12:15 Chicabuntu kernel: [15312.575455] tg3 0000:3f:00.0: eth0: Link is down
    Jun 30 17:12:16 Chicabuntu dhcpcd[28603]: eth0: sending signal 1 to pid 28361
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: received SIGHUP, releasing lease
    Jun 30 17:12:16 Chicabuntu dhcpcd[28603]: eth0: exiting
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Withdrawing address record for 192.168.1.111 on eth0.
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.1.111.
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv4 no longer relevant for mDNS.
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: removing default route via 192.168.1.254 metric 0
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv6 no longer relevant for mDNS.
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83.
    Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Withdrawing address record for fe80::21c:c4ff:fe31:1a83 on eth0.
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: netlink: No such process
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: removing IP address 192.168.1.111/24
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: netlink: Cannot assign requested address
    Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: exiting
    Jun 30 17:12:16 Chicabuntu dhcpcd.sh: interface eth0 has been brought down
    Jun 30 17:12:17 Chicabuntu kernel: [15313.612141] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X
    Jun 30 17:12:17 Chicabuntu kernel: [15313.644703] ADDRCONF(NETDEV_UP): eth0: link is not ready
    Jun 30 17:12:17 Chicabuntu dhcpcd[28674]: wlan1: dhcpcd not running
    Jun 30 17:12:17 Chicabuntu dhcpcd[28674]: wlan1: exiting
    Jun 30 17:12:17 Chicabuntu dhcpcd[28708]: eth0: dhcpcd not running
    Jun 30 17:12:17 Chicabuntu dhcpcd[28708]: eth0: exiting
    Jun 30 17:12:17 Chicabuntu kernel: [15313.912147] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X
    Jun 30 17:12:17 Chicabuntu kernel: [15313.944746] ADDRCONF(NETDEV_UP): eth0: link is not ready
    Jun 30 17:12:18 Chicabuntu kernel: [15315.592569] tg3 0000:3f:00.0: eth0: Link is up at 100 Mbps, full duplex
    Jun 30 17:12:18 Chicabuntu kernel: [15315.592576] tg3 0000:3f:00.0: eth0: Flow control is on for TX and on for RX
    Jun 30 17:12:18 Chicabuntu kernel: [15315.593399] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83.
    Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv6 for mDNS.
    Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: Registering new address record for fe80::21c:c4ff:fe31:1a83 on eth0.*.
    Jun 30 17:12:29 Chicabuntu kernel: [15325.680019] eth0: no IPv6 routers present

    If this isn't useful, I can also post the wicd log, but that is kind of long. If anyone could help me I would be eternally grateful.

    Read the article

  • Why are part-time jobs in programming an anomaly?

    - by Mikle
    I've recently quit my full-time developing job at mega-corp, and I decided that I'll look for a part-time job. Since then I've talked to half a dozen potential employers, and every one of them had the same reaction when I said the magic words "part-time" - they all closed up and became suspicious. Now, I understand that it might just be me, so as a control I asked every one of them what would happen if I were willing to work full time, and they all said I would probably get an offer. My question is twofold: Why, as an employer, would you give up a competent, even great, developer, simply because he wants to work 3 days a week and not 5? How do I sell the story of a part-time job better? I usually just list my reasons, which are that I currently prefer that balance in my life and that I want to work on my own projects, but it leaves them even more suspicious - am I going to start something myself and quit? Am I just lazy?

    Read the article

  • Software Life-cycle of Hacking

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating the complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think that they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like:

    1. Find a gap in security
    2. Exploit the gap in security
    3. Procure a payload
    4. Utilize the payload

    I propose the following question: what kind of formal definitions (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • Telerik is First to Announce Support for Microsoft Silverlight Analytics Framework

    Yesterday at the MIX10 conference Microsoft announced the Microsoft Silverlight Analytics Framework Beta. The Silverlight Analytics Framework (SAF) is a new open-source framework that allows designers and developers to integrate web analytics into Silverlight applications in a consistent manner. Supporting out-of-browser and offline scenarios, Microsoft built this framework in conjunction with a number of web analytics services and control vendors to support multiple analytics services simultaneously without degrading application performance. Because the SAF is enabled as a set of behaviors in Microsoft Expression Blend, designers and developers can visually instrument their designs and configure A/B testing rapidly without writing any code. Telerik is proud to be the first control vendor to support the Silverlight Analytics Framework. RadControls for Silverlight can be used with the framework out of the box. The suite offers Silverlight Analytics Framework handlers and behaviors, helping developers to fine-tune the values sent to the analytics providers. Because the analytics framework uses the Managed Extensibility Framework (MEF) for composition, you don't need to change the way you use the controls to benefit from the Telerik handlers - just add a reference to the Telerik assemblies that contain the handlers. Here is the code that you need to declare to use RadTreeView:

    <UserControl x:Class="Telerik.SLAF.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
        xmlns:ga="clr-namespace:Google.WebAnalytics;assembly=Google.WebAnalytics"
        xmlns:sa="clr-namespace:Microsoft.WebAnalytics.Behaviors;assembly=Microsoft.WebAnalytics.Behaviors"
        xmlns:ic="clr-namespace:Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:telerikNavigation="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation">
        <Grid x:Name="LayoutRoot">
            <i:Interaction.Behaviors>
                <ga:GoogleAnalytics ProfileId="--Your GA ProfileId" Category="Demo" />
            </i:Interaction.Behaviors>
            <telerikNavigation:RadTreeView>
                <i:Interaction.Triggers>
                    <i:EventTrigger EventName="SelectionChanged">
                        <sa:TrackAction />
                    </i:EventTrigger>
                </i:Interaction.Triggers>
                <telerikNavigation:RadTreeViewItem Header="Item1" />
                <telerikNavigation:RadTreeViewItem Header="Item2" />
                <telerikNavigation:RadTreeViewItem Header="Item3" />
            </telerikNavigation:RadTreeView>
        </Grid>
    </UserControl>

    Download the Telerik Microsoft Silverlight Analytics Framework Handlers and the sample project. This is our first Beta release - please drop us a line with any feedback you have, or even better, if you are at MIX10 come visit us at the booth in the "Commons" hall so we can discuss it in person.

    Read the article

  • SANS Webcast: Label Based Access Controls in Oracle Database 11g

    - by Troy Kitch
    Controlling access to data subsets within an application table can be difficult and inefficient, especially when faced with specific data ownership, consolidation and multi-tenancy requirements. However, this can be elegantly addressed using label based access control (LBAC). In this webcast you will learn how LBAC, using Oracle Label Security and Oracle Database 11g, can easily enforce row-level access based on user security clearance. In addition, Oracle security experts will discuss real-world case studies demonstrating how customers, in industries ranging from retail to government, are relying on Oracle Label Security for virtual information partitioning and secure consolidation of information. Register for the July 12 webcast now.

    Read the article

  • New code base, what experiences/recommendations do you have?

    - by hlovdal
    Later this year I will start on a project (embedded hardware, C, small company) where I believe that most (if not all) code will be new. So what experiences do you have to share as advice for starting a new code base? What have you been missing in projects that you have been working on? What has worked really well? What has not worked? Let's limit this question to things that relate directly to the code (e.g. "banning the use of gets()": in scope; version control: borderline; build system: out of scope).

    Read the article

  • Translating error messages from an external API?

    - by Jan Fabry
    If I am localizing a piece of software that uses an external API, how should I handle error messages that originate in this API? I do not control the API, I only consume it. The error responses are not very structured: some contain error codes, some contain verbose details in the text, others almost nothing. Some errors can be fixed by the user (incorrect configuration), some are caused by the external service (server overload), some could be caused by a bug in my software (of course, this would be very unlikely...). I would like to provide a smooth experience to my end-users, so they know what went wrong and what they can do to fix it. What is the best strategy to use here? (This is a generalization of a question from the WordPress Stack Exchange. I thought it would be worth re-asking here, because it is not limited to WordPress plugins.)
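    One strategy I have been considering - sketched here in C#, with illustrative error codes and an invented ApiError/ILocalizer shape rather than any real service's API - is to map the codes I recognize to localized, actionable messages and to fall back to a generic localized wrapper around the raw text for everything else:

        using System.Collections.Generic;

        // Sketch only: the error codes and types below are illustrative assumptions.
        public record ApiError(string? Code, string RawMessage);

        public interface ILocalizer
        {
            string Get(string key, params object[] args);
        }

        public static class ApiErrorTranslator
        {
            // Known codes map to resource keys owned by my own localization system.
            private static readonly Dictionary<string, string> KnownCodes = new()
            {
                ["AUTH_FAILED"]     = "error.api.bad_credentials",    // user-fixable
                ["RATE_LIMITED"]    = "error.api.try_again_later",    // service-side
                ["SERVER_OVERLOAD"] = "error.api.service_unavailable"
            };

            public static string ToUserMessage(ApiError error, ILocalizer localizer)
            {
                if (error.Code is not null && KnownCodes.TryGetValue(error.Code, out var key))
                    return localizer.Get(key);

                // Unknown error: generic localized message, raw text kept for support.
                return localizer.Get("error.api.unknown", error.RawMessage);
            }
        }

    Is something along these lines considered good practice, or is there a better pattern?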

    Read the article

  • Oracle University New Courses (Week 42)

    - by rituchhibber
    New courses

    Among this month's new offerings from Oracle University, you will find:

    Database
    - Oracle Enterprise Manager Cloud Control 12c: Install & Upgrade (Training On Demand)
    - MySQL Performance Tuning (Training On Demand)

    Fusion Middleware
    - Oracle GoldenGate 11g Fundamentals for Oracle (4 days)
    - Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
    - Oracle WebCenter Portal 11g: Build Portals with Spaces (3 days)

    Business Intelligence
    - Oracle BI 11g R1: Create Analyses and Dashboards (4 days)

    SOA & BPM
    - SOA Adoption and Architecture Fundamentals (3 days)

    eBusiness Suite
    - R12 Oracle Using and Maintaining Approvals Management - Self-Study Course
    - R12 Oracle HRMS Advanced Benefits Fundamentals - Self-Study Course

    WebLogic
    - Oracle WebLogic Server 11g: Monitor and Tune Performance (Training On Demand)
    - Oracle WebLogic Server 11g: Administration Essentials - Self-Study Course (in French)

    Financial
    - Oracle Project Financial Planning 11.1.2: Create Projects (3 days)

    Tuxedo
    - Oracle Tuxedo 12c: Application Administration (5 days)

    Java
    - Java SE 7: The Platform Evolves - Self-Study Course

    Primavera
    - Primavera Client/Server Partner Trainer Course - Self-Study Course
    - Primavera Progress Reporter 8.2 - Self-Study Course

    Contact your local Oracle University team for information and course dates.

    Read the article

  • Effective template system

    - by Alex
    I'm building a content management system and need advice on which theming structure I should adopt. A few options (this is not a complete list):

    - Wordpress style: the controller decides which template to load based on the user request (home page, article archive, single article page, etc.). Each of these templates is unrelated to the other templates and must exist within the theme. The theme developer decides whether to use inner-templates (like "sidebar" or "sidebar item") and includes them manually wherever they are needed.
    - Drupal style: the controller gives the theme developer control only over inner-templates; if they don't exist, it falls back internally to some default templates (I find this very restrictive).
    - Funky style: the controller only loads an "index.php" template and provides the theme developer with conditional tags, which they can use to include inner-templates if they want (sketched below).

    Among these styles, or others, which style of template system allows for fast development and a more concise design and implementation?
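    To make the "funky style" concrete, here is a minimal sketch of what such an index.php might look like; the conditional tags (is_home(), is_single()) and get_inner_template() are hypothetical helpers the CMS would have to expose, not an existing API:

        <?php
        // index.php - the single entry template; the theme developer decides
        // which inner-templates to include, using CMS-provided conditional tags.
        // is_home(), is_single() and get_inner_template() are hypothetical.

        if (is_home()) {
            get_inner_template('article-list');    // front page: recent articles
        } elseif (is_single()) {
            get_inner_template('single-article');  // one article plus comments
        } else {
            get_inner_template('not-found');       // fallback
        }

        get_inner_template('sidebar');  // included only where the theme wants it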

    Read the article

  • How can I disable Hibernate completely?

    - by Lekensteyn
    I have seen the answer on How to disable hibernating?, but I have no such file, possibly because that suggestion was written for Ubuntu, not Kubuntu (KDE, not Gnome). I do not have swap on my encrypted SSD, and my system freezes (I cannot even toggle Caps Lock) if I accidentally press the "Hibernate" button in "Energy management". My keyboard has a Hibernate button (Fn + F4) next to the volume control buttons, and every time I press the wrong key, the system freezes. So, what is the correct way to disable it? If there is no solution, a work-around is welcome too.
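    One approach commonly suggested for Ubuntu-family releases of that era - offered here as a sketch, since the exact PolicyKit action name and path can vary by release and desktop - is to deny the hibernate action system-wide with a local PolicyKit override:

        # /etc/polkit-1/localauthority/50-local.d/com.ubuntu.disable-hibernate.pkla
        [Disable hibernate]
        Identity=unix-user:*
        Action=org.freedesktop.upower.hibernate
        ResultActive=no

    After a reboot, desktop environments that honor PolicyKit should stop offering the hibernate option, which would also neutralize the Fn + F4 key.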

    Read the article

  • Week in Geek: Rogue Antivirus Caught Using AVG Logo Edition

    - by Asian Angel
    This week we learned how to quickly cut a clip from a video file with Avidemux, tile windows, remote control a desktop using an iOS device, take advantage of Windows 7 libraries, turn a home Ubuntu PC into a LAMP web server, and enable desktop notifications for Gmail in Chrome, as well as what image channels are and what they mean, and more.

    Read the article

  • Should I use a Class or Dictionary to Store Form Values

    - by Shamim Hafiz
    I am working on a C# .NET application, where I have a Form with lots of controls. I need to perform computations depending on the values of the controls, so I need to pass the Form values to a function; inside that function, several helper functions will be called depending on the control element. Now, I can think of two ways to pass all the Form values: i) save everything in a Dictionary and pass the Dictionary to the function, or ii) have a class with attributes that correspond to each of the Form elements. Which of these two approaches, or any other, is better?
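    For comparison, here is a minimal sketch of the class option against the dictionary option (all names are illustrative, not from the actual form):

        // Option ii: a dedicated class - property names and types are
        // checked at compile time, and IntelliSense knows about them.
        public class OrderFormValues
        {
            public string CustomerName { get; set; }
            public int Quantity { get; set; }
            public bool ExpressShipping { get; set; }
        }

        public static class OrderCalculator
        {
            public static decimal ComputeTotal(OrderFormValues form)
            {
                var baseCost = form.Quantity * 9.99m;
                return form.ExpressShipping ? baseCost + 15m : baseCost;
            }
        }

        // Option i: a Dictionary<string, object> is more flexible, but a typo
        // like values["Quantiy"] compiles fine and only fails at runtime,
        // and every read needs a cast back to the expected type.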

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving has to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight, and it does not happen with the purchase of the needed technology. Not one of us could go out, buy a parachute, the harnesses, helmet and all the gear, and convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and so it is with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management.

    Becoming a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months, and then must complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone who has trusted their life to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration.

    But guess what? Humans are involved, and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His over-arching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete for the following reasons:

    1. People are involved. And let's face it, some are more trustworthy than others.
    2. Change is constant. There is always some new technology coming along that is disruptive. Consumer-grade cloud file sharing and sync tools come to mind here.
    3. Compliance is interpreted, not defined. Laws and the judges that read them are always on the move.
    4. Technology is a tool, not a complete solution. There is no magic pill.

    The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never a constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
    Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment, and maybe even your company, at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic. In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps. That's 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012. Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately. Consider all the factors involved in your organization as you make your content management plans. For additional areas of consideration, be sure to download our free whitepaper on the topic, entitled "The Top 10 Criteria for Choosing an ECM System", which is available for download here.

    Read the article

  • Is paying programmers to "test" for bugs normal? [on hold]

    - by user106277
    I recently hired a programming team to port my iPad app to the iPhone and Android platforms. I also wanted them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "testing all of the tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I have never heard of this, but maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period... Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: my name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges. Enough about me - on to a real problem.

    SSIS Connection Timeouts versus Command Timeouts

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window. My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After several more failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.

    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully. The lesson learned for me was two-fold:

    1. Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    2. SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I can only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • Welcome to a new blog!! Agile.NAV

    - by ssmantha
    I am quite ecstatic to announce a new blog, of which I am a co-author: http://agilenav.wordpress.com. Agile.NAV brings together a vast amount of information about the work I did with my colleague on bringing Microsoft Dynamics NAV under the hood of Team Foundation Server. For the past couple of years we have been working on creating development tools (more on the integration side) for Microsoft Dynamics NAV, including version control, an automated build system and our new test-automation integration with Dynamics NAV 2013. To start off, we got very good initial responses from the community's distinguished members, like Luc van Vugt (see here). The idea is to drive a shift in mind-set in the Microsoft Dynamics NAV developer community. We share the same passion as people like Luc about creating software in a professional manner.

    Read the article

  • Automating custom software installation in a zone

    - by mgerdts
    In Solaris 11, the internals of zone installation are quite different than they were in Solaris 10. This difference allows the administrator far greater control over what software is installed in a zone. The rules in Solaris 10 are simple and inflexible: if it is installed in the global zone and is not specifically excluded by package metadata from being installed in a zone, it is installed in the zone. In Solaris 11, the rules are still simple, but are much more flexible: the packages you tell it to install and the packages on which they depend will be installed.

    So, where does the default list of packages come from? From the AI (auto installer) manifest, of course. The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml. Within that file you will find:

                <software_data action="install">
                    <name>pkg:/group/system/solaris-small-server</name>
                </software_data>

    So, the default installation will install pkg:/group/system/solaris-small-server. Cool. What is that? You can figure out what is in the package by looking for it in the repository with your web browser (click the manifest link), or use pkg(1). In this case, it is a group package (pkg:/group/), so we know that it just has a bunch of dependencies naming the packages that it really wants installed.

    $ pkg contents -t depend -o fmri -s fmri -r solaris-small-server
    FMRI
    compress/bzip2
    compress/gzip
    compress/p7zip
    ...
    terminal/luit
    terminal/resize
    text/doctools
    text/doctools/ja
    text/less
    text/spelling-utilities
    web/wget

    If you would like to see the entire manifest from the command line, use pkg contents -r -m solaris-small-server.

    Let's suppose that you want to install a zone that also has mercurial and a full-fledged installation of vim, rather than just the minimal vim-core that is part of solaris-small-server. That's pretty easy. First, copy the default AI manifest somewhere where you will edit it and make it writable.

    # cp /usr/share/auto_install/manifest/zone_default.xml ~/myzone-ai.xml
    # chmod 644 ~/myzone-ai.xml

    Next, edit the file, changing the software_data section as follows:

                <software_data action="install">
                    <name>pkg:/group/system/solaris-small-server</name>
                    <name>pkg:/developer/versioning/mercurial</name>
                    <name>pkg:/editor/vim</name>
                </software_data>

    To figure out the names of the packages, either search the repository using your browser, or use a command like pkg search hg.

    Now we are all ready to install the zone. If it has not yet been configured, that must be done as well.

    # zonecfg -z myzone 'create; set zonepath=/zones/myzone'
    # zoneadm -z myzone install -m ~/myzone-ai.xml
    A ZFS file system has been created for this zone.
    Progress being logged to /var/log/zones/zoneadm.20111113T004303Z.myzone.install
           Image: Preparing at /zones/myzone/root.
     Install Log: /system/volatile/install.15496/install_log
     AI Manifest: /tmp/manifest.xml.XfaWpE
      SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename: myzone
    Installation: Starting ...
                  Creating IPS image
                  Installing packages from:
                      solaris
                          origin: http://localhost:1008/solaris/54453f3545de891d4daa841ddb3c844fe8804f55/
    DOWNLOAD                              PKGS       FILES    XFER (MB)
    Completed                          169/169 34047/34047  185.6/185.6
    PHASE                                    ACTIONS
    Install Phase                        46498/46498
    PHASE                                      ITEMS
    Package State Update Phase           169/169
    Image State Update Phase                 2/2
    Installation: Succeeded
            Note: Man pages can be obtained by installing pkg:/system/manual
     done.
            Done: Installation completed in 531.813 seconds.
      Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
    Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20111113T004303Z.myzone.install

    Now, for a few things that I've seen people trip over:

    - Ignore that bit about man pages - it's wrong. Man pages are already installed so long as the right facet is set properly. And that's a topic for another blog entry.
    - If you boot the zone then just use zlogin myzone, you will see that services you care about haven't started and that svc:/milestone/config:default is starting. That is because you have not yet logged into the console with zlogin -C myzone.
    - If the zone has been booted for more than a very short while when you first connect to the zone console, it will seem like the console is hung. That's not really the case - hit ^L (control-L) to refresh the sysconfig(1M) screen that is prompting you for information.

    Read the article

  • Ruby on Rails background API polling

    - by Matthew Turney
    I need to build a free/busy calendar integration with Zimbra. Unlike Outlook, it seems, Zimbra requires polling their API. I need to be able to grab the free/busy data in background tasks for tens of thousands of users on a regular time interval, preferably every few minutes. What would be the best way to implement this in a Rails application without bogging down our current resque tasks? I have considered moving this process to something like node.js, or something similar in Ruby. The biggest problem is that we have no control over the IO, as each client's Zimbra instance could be slow, and we don't want to create a huge backlog of tasks. Thoughts and ideas?
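    To make the question concrete, here is a sketch of the shape I have been considering in plain Ruby - a bounded worker pool that walks the user list each interval, so one slow Zimbra host cannot pile up work (ZimbraClient#fetch_free_busy and FreeBusyCache are placeholders, not a real gem API):

        require 'thread'

        POLL_INTERVAL = 180  # seconds between sweeps
        POOL_SIZE     = 20   # cap on concurrent Zimbra requests

        def poll_free_busy(user_ids)
          queue = Queue.new
          user_ids.each { |id| queue << id }

          POOL_SIZE.times.map do
            Thread.new do
              # pop(true) raises ThreadError when the queue is empty
              while (id = (queue.pop(true) rescue nil))
                begin
                  data = ZimbraClient.new.fetch_free_busy(id)  # placeholder call
                  FreeBusyCache.write(id, data)                # placeholder store
                rescue StandardError => e
                  warn "free/busy poll failed for user #{id}: #{e.message}"
                end
              end
            end
          end.each(&:join)
        end

        loop do
          poll_free_busy(User.pluck(:id))  # assumes an ActiveRecord User model
          sleep POLL_INTERVAL
        end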

    Read the article

  • Can we set up svn server on a local computer without any network access?

    - by Aitezaz Abdullah
    I want to set up an SVN repository on my computer without any network access. I am working on code without any collaborators, so I don't want it to be publicly available. I read the following post: http://stackoverflow.com/questions/6001445/local-source-control-repository-cross-platform, but it suggests using online SVN repository services that offer free repositories. In that case, my code would be publicly available (as is required by the terms of the free plans). So I was wondering: can I set up a local server on my Windows XP machine that only I can access, even when I don't have any internet connection?
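    For reference, Subversion does not actually require a network server for this - a repository on the local disk can be accessed directly through file:// URLs. A minimal sketch on Windows (paths are illustrative):

        rem Create a repository on the local disk (no svnserve or Apache needed)
        svnadmin create C:\svnrepo

        rem Check out a working copy from it via a file:// URL
        svn checkout file:///C:/svnrepo C:\work\myproject

        rem Work as usual; commits never leave the machine
        svn add C:\work\myproject\main.c
        svn commit C:\work\myproject -m "first commit"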

    Read the article
