Search Results

Search found 54118 results on 2165 pages for 'default value'.


  • Mapping Repeating Sequence Groups in BizTalk

    - by Paul Petrov
    Repeating sequence groups can often be seen in real-life XML documents. It happens when a certain sequence of elements repeats in the instance document. Here's a fairly abstract example of a schema definition that contains a sequence group:

        <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns="NS-Schema1"
                   targetNamespace="NS-Schema1">
          <xs:element name="RepeatingSequenceGroups">
            <xs:complexType>
              <xs:sequence maxOccurs="1" minOccurs="0">
                <xs:sequence maxOccurs="unbounded">
                  <xs:element name="A" type="xs:string" />
                  <xs:element name="B" type="xs:string" />
                  <xs:element name="C" type="xs:string" minOccurs="0" />
                </xs:sequence>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    And here's the corresponding XML instance document:

        <ns0:RepeatingSequenceGroups xmlns:ns0="NS-Schema1">
          <A>A1</A>
          <B>B1</B>
          <C>C1</C>
          <A>A2</A>
          <B>B2</B>
          <A>A3</A>
          <B>B3</B>
          <C>C3</C>
        </ns0:RepeatingSequenceGroups>

    As you can see, elements A, B, and C are children of an anonymous xs:sequence element, which in turn can be repeated N times. Let's say we need a simple mapping to a schema with a similar structure but different element names:

        <ns0:Destination xmlns:ns0="NS-Schema2">
          <Alpha>A1</Alpha>
          <Beta>B1</Beta>
          <Gamma>C1</Gamma>
          <Alpha>A2</Alpha>
          <Beta>B2</Beta>
          <Gamma>C2</Gamma>
        </ns0:Destination>

    The basic map for such a typical task would look pretty straightforward. But if we test this map without any modification, it produces the following result:

        <ns0:Destination xmlns:ns0="NS-Schema2">
          <Alpha>A1</Alpha>
          <Alpha>A2</Alpha>
          <Alpha>A3</Alpha>
          <Beta>B1</Beta>
          <Beta>B2</Beta>
          <Beta>B3</Beta>
          <Gamma>C1</Gamma>
          <Gamma>C3</Gamma>
        </ns0:Destination>

    The original order of the elements inside the sequence is lost, and that's not what we want. The default behavior of the BizTalk 2009 and 2010 Map Editor is to generate maps compatible with older versions that did not have the ability to preserve sequence order. To enable this feature, simply open the map file (*.btm) in a text/XML editor and find the PreserveSequenceOrder attribute of the root <mapsource> element. Set its value to Yes and re-test the map:

        <ns0:Destination xmlns:ns0="NS-Schema2">
          <Alpha>A1</Alpha>
          <Beta>B1</Beta>
          <Gamma>C1</Gamma>
          <Alpha>A2</Alpha>
          <Beta>B2</Beta>
          <Alpha>A3</Alpha>
          <Beta>B3</Beta>
          <Gamma>C3</Gamma>
        </ns0:Destination>

    The result is as expected: all corresponding elements are in the same order as in the source document. Under the hood this is achieved by one common xsl:for-each statement that pulls all elements in their original order (rather than one for-each statement per element name, as in the default mode), plus xsl:if statements to test the current element in the loop:

        <xsl:template match="/s0:RepeatingSequenceGroups">
          <ns0:Destination>
            <xsl:for-each select="A|B|C">
              <xsl:if test="local-name()='A'">
                <Alpha>
                  <xsl:value-of select="./text()" />
                </Alpha>
              </xsl:if>
              <xsl:if test="local-name()='B'">
                <Beta>
                  <xsl:value-of select="./text()" />
                </Beta>
              </xsl:if>
              <xsl:if test="local-name()='C'">
                <Gamma>
                  <xsl:value-of select="./text()" />
                </Gamma>
              </xsl:if>
            </xsl:for-each>
          </ns0:Destination>
        </xsl:template>

    The BizTalk Map Editor became smarter, so learn and use this lesser-known feature in your maps and XSL stylesheets.
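
    For contrast, here is a sketch of the per-element-name stylesheet shape the default mode is described as generating (illustrative only, not the editor's verbatim output) - it is exactly this regrouping by name that loses the interleaved order:

        <!-- Illustrative sketch of default-mode logic, not the editor's exact XSLT -->
        <xsl:template match="/s0:RepeatingSequenceGroups">
          <ns0:Destination>
            <xsl:for-each select="A">
              <Alpha><xsl:value-of select="./text()" /></Alpha>
            </xsl:for-each>
            <xsl:for-each select="B">
              <Beta><xsl:value-of select="./text()" /></Beta>
            </xsl:for-each>
            <xsl:for-each select="C">
              <Gamma><xsl:value-of select="./text()" /></Gamma>
            </xsl:for-each>
          </ns0:Destination>
        </xsl:template>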

    Read the article

  • What equipment do real ISP's use?

    - by Allanrbo
    In a dormitory of 550 residents, people often mistakenly set up DHCP servers for the whole network by plugging in their private Wi-Fi routers the wrong way around. Also, someone recently misconfigured their PC with a static IP identical to that of the default gateway. We use cheap 3Com switches at the moment. I know that Cisco switches support DHCP snooping to solve the DHCP problem, but that still does not solve the default-gateway IP takeover problem. What sort of switch equipment do real ISPs use so their customers cannot break the network for the other customers? What we ended up doing: in case anyone is curious, we ended up creating separate VLANs for each user. And in fact not just for the 550 users, but for 2500 users (11 dorms). Here's a page describing the setup: http://k-net.dk/technicalsetup/ (the section "Transparent firewall using VLANs"). There was no significant load on the router server, as I had feared in one of the comments below. Even at 800 Mbps.

    Read the article

  • Puppet class inheritance confusion

    - by EMiller
    I've read the documentation on scope, but I'm still having trouble working this out. I've got two environments that are very similar, so I've got:

        # modules/django-env/manifests/init.pp
        class django-env {
            package { "python26": ensure => installed }
            # etc ...
        }
        import "er.pp"

        # modules/django-env/manifests/er.pp
        $venvname = "er"
        $venvpath = "/home/django/virtualenvs"
        class er {
            file { "$venvpath/$venvname": ensure => directory }
            # etc ...
        }
        class er-dev {
            include er
        }
        class er-bce-dev {
            $venvname = "er-bce"
            include er
        }

        # manifests/modules.pp
        import "django-env"

        # manifests/nodes.pp
        node default {
            # etc ...
        }
        node 'centos-dev' inherits default {
            include django-env
            include er-bce-dev
            include er-dev
        }

    The result here is that the "inheritance" works, but only the first "er-" item under the 'centos-dev' node is acted upon: I get either er-bce-dev or er-dev, but not both. There must be some basic thing I'm misunderstanding here. Is it the difference between import and include? (I'm not sure I understand that.)
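
    For what it's worth, one idiomatic way around this kind of scoping problem (a sketch only, reusing the question's names; it assumes a defined type fits the use case) is a define, which, unlike a class, can be instantiated once per virtualenv:

        # Hypothetical rework of er.pp as a defined type;
        # $name is the title given at each instantiation.
        define venv($venvpath = "/home/django/virtualenvs") {
            file { "${venvpath}/${name}":
                ensure => directory,
            }
            # etc ...
        }

        class er-dev {
            venv { "er": }
        }
        class er-bce-dev {
            venv { "er-bce": }
        }

    Because each venv instance has its own title, including both er-dev and er-bce-dev on one node no longer collides the way two includes of class er (with one shared $venvname) do.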

    Read the article

  • Nvidia dual monitor configuration gets lost every time I reboot

    - by sunwukung
    I've recently updated (well, borked then completely reinstalled) to 12.04. I'm running a dual-monitor setup, with a Dell U2410 / Dell 2007WFP combination on an HP EliteBook 8560W. The graphics card is an NVIDIA GF108 [Quadro 1000M]. My problem is as follows: I can get the dual-monitor setup working fine, but every time I reboot, my machine appears to lose the settings (specifically, the U2410 is disabled and the mouse pointer is locked in the launcher). I have to restate the settings after every launch. I've tried running nvidia-settings as sudo, and I've saved the changes to my xorg.conf file (see below), but nothing seems to stick. Has anyone had similar issues, or know of a fix? Conf file follows:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "DELL 2007WFP"
            HorizSync       30.0 - 83.0
            VertRefresh     56.0 - 76.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Quadro 1000M"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "DFP-1"
            Option         "metamodes" "CRT: 1680x1050 +1920+0, DFP-1: 1920x1200 +0+0; CRT: nvidia-auto-select +0+0, DFP-1: NULL"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    The error message I'm getting is this:

        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 642:
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 0)
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 0)
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 0)
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 1)
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 1)
        CRTC 642: trying mode 3600x1080@50hz with output at 1280x1024@0Hz (pass 1)

    Read the article

  • How to recover from terminal custom setup

    - by linq
    I installed Ubuntu 12.04 and added "Terminal" to the launcher bar. The default size of the terminal is a bit small, and after some googling, here is how I made the disastrous change: at the HUD menu I typed "preference" and saw the option Edit > Preference, as I expected. After I clicked the option (I forget the exact steps), I somehow came to a configuration panel with a "custom" checkbox for the terminal command. I checked it, an input box became enabled, and I typed: gnome-terminal --geometry 160x50. Now the problem: whenever I click the terminal button on the launcher bar, new terminal windows pop up endlessly from everywhere, and I cannot do anything any more except log out or shut down. The weird thing is, after I come back, I can no longer find Edit > Preference when I type "preference" in the HUD; I tried "Edit" and "preference". To be clear, I can still use the computer as long as I do not click that terminal button, but I really want to use the terminal to run commands. Help is greatly appreciated! Update - OK, figured it out. Go to /home/jibin/.gconf/apps/gnome-terminal/profiles/Default and open the xml file there; you can see the options changed in the GUI. Remove them and the terminal runs fine. The Preference HUD entry is back as well: it is actually Profile Preferences, and it is only available when a terminal is open. Continuing my ubuntu-ing...

    Read the article

  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Tony Berk
    There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! As I mentioned in my earlier post about some of the keynote sessions, Is There a Cloud Over OpenWorld?, I'm going to try to highlight some key sessions to help you find the best ones for you. Interested in finding out where Oracle CRM products are headed? Then find your "roadmap" session. Here are some of the sessions in the CRM Track that you might want to consider attending, for products you currently own or might consider for the future. I think you'll agree there is quite a bit of investment going on across Oracle CRM. Please use the OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity.

    General Session: Oracle Fusion CRM - Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data - all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations.

    Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15 PM - 4:15 PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements.

    Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15 PM - 1:15 PM. The world's most complete CRM solution, Oracle's Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications; and Oracle Policy Automation in conjunction with their current Siebel investments.

    Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM. Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes.

    Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45 AM - 11:45 AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15 PM - 1:15 PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve the customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle's product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families.

    Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15 PM - 1:15 PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line.

    There is much more, so stay tuned for more highlights, or check out the Content Catalog and search for your areas of interest. Which session are you most interested in? Make your suggestions! But no voting for Pearl Jam or Kings of Leon. Those are after hours!

    Read the article

  • Liskov principle: violation by type-hinting

    - by Elias Van Ootegem
    According to the Liskov principle, a construction like the one below is invalid, as it strengthens a precondition. I know the example is pointless/nonsense, but when I last asked a question like this and used a more elaborate code sample, it seemed to distract people too much from the actual question.

        //Data models
        abstract class Argument
        {
            protected $value = null;

            public function getValue()
            {
                return $this->value;
            }

            abstract public function setValue($val);
        }

        class Numeric extends Argument
        {
            public function setValue($val)
            {
                $this->value = $val + 0; //coerce to number
                return $this;
            }
        }

        //used here:
        abstract class Output
        {
            public function printValue(Argument $arg)
            {
                echo $this->format($arg);
                return $this;
            }

            abstract public function format(Argument $arg);
        }

        class OutputNumeric extends Output
        {
            public function format(Numeric $arg) //<-- VIOLATION!
            {
                $format = is_float($arg->getValue()) ? '%.3f' : '%d';
                return sprintf($format, $arg->getValue());
            }
        }

    My question is this: why would this kind of "violation" be considered harmful? So much so that some languages, like the one I used in this example (PHP), don't even allow it? I'm not allowed to strengthen the type hint of an abstract method but, by overriding the printValue method, I am allowed to write:

        class OutputNumeric extends Output
        {
            final public function printValue(Numeric $arg)
            {
                echo $this->format($arg);
            }

            public function format(Argument $arg)
            {
                $format = is_float($arg->getValue()) ? '%.3f' : '%d';
                return sprintf($format, $arg->getValue());
            }
        }

    But this would imply repeating myself for each and every child of Output, and makes my objects harder to reuse. I understand why the Liskov principle exists, don't get me wrong, but I find it somewhat difficult to fathom why the signature of an abstract method in an abstract class has to be adhered to so much more strictly than a non-abstract method. Could someone explain to me why I'm not allowed to hint at a child class, in a child class? The way I see it, the child class OutputNumeric is a specific use case of Output, and thus might need a specific instance of Argument, namely Numeric. Is it really so wrong of me to write code like this?
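
    As an aside, the common PHP workaround (a sketch of mine, not from the question) keeps the parent's signature and narrows the type at runtime instead - though note this merely trades the compile-time violation for a runtime one, which is rather the crux of the question:

        class OutputNumeric extends Output
        {
            public function format(Argument $arg)
            {
                // Signature still honors the inherited contract "accepts any Argument";
                // the narrowing happens here, at runtime.
                if (!$arg instanceof Numeric) {
                    throw new InvalidArgumentException('OutputNumeric expects a Numeric argument');
                }
                $format = is_float($arg->getValue()) ? '%.3f' : '%d';
                return sprintf($format, $arg->getValue());
            }
        }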

    Read the article

  • Why is Samba Access from Windows So Slow?

    - by swalker2001
    I have set up a file server using Ubuntu 12.04 Server. The purpose is to serve several network drives to Windows users that have heretofore been served by numerous NAS drives. I have Samba set up with one share defined so far, and I can connect to it fine from my test Windows 7 and Windows XP machines. When I do a directory listing on the share from Windows, it can take up to two minutes to list all the files - it would have taken about 1.5 seconds on the Buffalo NAS. Sometimes it times out with no response at all. I have used the default smb.conf and simply added the following for the share I have set up so far:

        [engineering]
        comment = Ubuntu File Server Share
        path = /networkdriveshares/engineering
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755

    I have tried changing the workgroup setting to the Active Directory domain name our Windows computers use, but didn't notice any difference. The only other change I made to the default smb.conf was adding in the recommended socket settings:

        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

    There is lots of information about slow Samba shares online, but I have tried all of the solutions I found and none have made a lick of difference. If there is no solution, is there a better way to set up a file server to be used by Windows clients?

    Read the article

  • null values vs "empty" singleton for optional fields

    - by Uko
    First of all, I'm developing a parser for an XML-based format for 3D graphics called XGL. But this question can be applied to any situation where you have fields in your class that are optional, i.e. the value of the field can be missing. As I was taking a Scala course on Coursera, there was an interesting pattern: you create an abstract class with all the methods you need, then create a normal, fully functional subclass and an "empty" singleton subclass that always returns true for the isEmpty method and throws exceptions for the other ones. So my question is: is it better to just assign null if the optional field's value is missing, or to make a hierarchy as described above and assign an empty singleton implementation?
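
    For reference, a minimal sketch of the pattern being described, in the style of the Scala course (the names are mine, not from XGL):

        // Hypothetical optional field wrapper; the "empty" case is a singleton object.
        abstract class MaybeColor {
          def isEmpty: Boolean
          def value: Int // e.g. a packed RGB value
        }

        class SomeColor(val value: Int) extends MaybeColor {
          def isEmpty = false
        }

        object NoColor extends MaybeColor {
          def isEmpty = true
          def value = throw new NoSuchElementException("NoColor.value")
        }

    In Scala itself this niche is filled by the standard Option[T] type (Some/None), which avoids both null and a hand-rolled hierarchy; in languages without it, the hand-rolled version above is essentially the Null Object pattern.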

    Read the article

  • Ubuntu 12.04 connected to wireless network but internet not working

    - by A.J.
    I can connect to my house's wireless network just fine, but when I'm connected I can't browse the web. Firefox starts connecting to a site and then just poops out. This doesn't happen on my roommates' computers (running Windows) or on our 3DSes, so I know it's just my laptop. I already tried:

        sudo dhclient -r
        sudo dhclient
        sudo ifconfig eth0 down
        sudo ifconfig eth0 up

    Results of a few commands I was asked to run in comments:

        $ ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        ^C
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        $ ping -c 2 google.com
        PING google.com (173.194.33.38) 56(84) bytes of data.
        --- google.com ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1006ms

        $ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: eth0 -----------------------------------------------------------------
          Type:              Wired
          Driver:            atl1c
          State:             unavailable
          Default:           no
          HW Address:        88:AE:1D:6B:4E:E7
          Capabilities:
            Carrier Detect:  yes
            Speed:           100 Mb/s
          Wired Properties
            Carrier:         off

        - Device: wlan0  [JUSTICE] -----------------------------------------------------
          Type:              802.11 WiFi
          Driver:            ath9k
          State:             connected
          Default:           yes
          HW Address:        1C:65:9D:65:C6:31
          Capabilities:
            Speed:           1 Mb/s
          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes
          Wireless Access Points (* = current AP)
            HOME-9B18:               Infra, 00:26:F3:53:9B:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2
            cougdad48 Network:       Infra, 60:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 22 WPA2
            cougdad48 Guest Network: Infra, 66:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA2
            belkin.ade:              Infra, 94:44:52:FF:8A:DE, Freq 2457 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2
            *JUSTICE:                Infra, 00:24:01:7B:9F:7E, Freq 2462 MHz, Rate 54 Mb/s, Strength 88 WEP
            CenturyLink:             Infra, B2:B2:DC:8E:E2:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2
          IPv4 Settings:
            Address:         192.168.0.11
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.1
            DNS:             192.168.0.1

    (JUSTICE is my home's network.)

        $ ping -c 2 198.168.0.1
        PING 198.168.0.1 (198.168.0.1) 56(84) bytes of data.
        --- 198.168.0.1 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#.

    The volatile. prefix

    volatile is a tricky one, as there are varying levels of documentation on it. From what I can see, it has two effects: (1) it prevents caching of the load or store value - rather than reading or writing a cached version of the memory location (say, a processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads; and (2) it forces a memory barrier at the prefixed instruction, which ensures instructions don't get re-ordered around the volatile instruction. The second effect is slightly more complicated than it first seems, and only seems to matter on certain architectures; for more details, Joe Duffy has a blog post going into the details. For this post, I'll be concentrating on the first aspect of volatile.

    Caching field accesses

    To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code:

        .class public Holder {
            .field public static class Holder holder
            .field public bool stop

            .method public static specialname void .cctor() {
                newobj instance void Holder::.ctor()
                stsfld class Holder Holder::holder
                ret
            }
        }

        .method private static void Main() {
            .entrypoint
            // Thread t = new Thread(new ThreadStart(DoWork))
            // t.Start()
            // Thread.Sleep(2000)
            // Console.WriteLine("Stopping thread...")
            ldsfld class Holder Holder::holder
            ldc.i4.1
            stfld bool Holder::stop
            call instance void [mscorlib]System.Threading.Thread::Join()
            ret
        }

        .method private static void DoWork() {
            ldsfld class Holder Holder::holder
            // while (!Holder.holder.stop) {}
        DoWork:
            dup
            ldfld bool Holder::stop
            brfalse DoWork
            pop
            ret
        }

    If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never updated with the new value set by the Main method. Adding volatile. to the ldfld fixes this:

        dup
        volatile.
        ldfld bool Holder::stop
        brfalse DoWork

    The volatile ldfld forces the field access to read directly from heap memory, which is then updated by the main thread, rather than using a cached copy.

    volatile in C#

    This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly - but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.
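
    For comparison, here is a minimal C# sketch of what the IL above "boils down to" (my reconstruction, not code from the post):

        using System;
        using System.Threading;

        class Holder
        {
            public static Holder holder = new Holder();

            // In C#, 'volatile' on the field makes *every* access to it volatile,
            // unlike IL, where the volatile. prefix applies to a single instruction.
            public volatile bool stop;
        }

        static class Program
        {
            static void Main()
            {
                var t = new Thread(DoWork);
                t.Start();
                Thread.Sleep(2000);
                Console.WriteLine("Stopping thread...");
                Holder.holder.stop = true;
                t.Join(); // without 'volatile', this spin may never observe the write
            }

            static void DoWork()
            {
                while (!Holder.holder.stop) { }
            }
        }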

    Read the article

  • Powershell Active Directory Account Attribute to a variable

    - by Bill Garrett
    Sorry for the newbie question. I am using PowerShell 3 to get a list of all user accounts. I am trying to generate output for each account, either "Enabled" or "Disabled". I am able to get the account status code from Active Directory using:

        $rc = $Rech.PropertiesToLoad.Add("userAccountControl");

    That displays the correct account status code. When I try to use an if statement on the value, I don't get any result. How do I put this value into a variable so I can use some logic with it? In the end, my requirement is to produce a CSV file that I can send to HR for examination, and instead of a code I would like it to say "Enabled" or "Disabled". Thank you.
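
    A sketch of one way to do this, assuming $Rech is a System.DirectoryServices.DirectorySearcher (which the PropertiesToLoad call suggests); the ACCOUNTDISABLE flag is bit 0x2 of userAccountControl, and the output file name is just an example:

        $Rech.PropertiesToLoad.Add("userAccountControl") | Out-Null
        $Rech.PropertiesToLoad.Add("sAMAccountName")     | Out-Null

        $Rech.FindAll() | ForEach-Object {
            # Pull the raw flags value into a plain integer variable
            $uac = [int]$_.Properties["useraccountcontrol"][0]
            [PSCustomObject]@{
                Account = $_.Properties["samaccountname"][0]
                Status  = $(if ($uac -band 0x2) { "Disabled" } else { "Enabled" })
            }
        } | Export-Csv -Path accounts.csv -NoTypeInformation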

    Read the article

  • My parked domain was de-indexed by Google - what to do?

    - by Programmer Joe
    I have a question about how to handle my domain. In a nutshell, I bought a domain last year from Go Daddy. My intention was to launch a real site with this domain, and I have spent the last year working on the site. For the last year, I have been using the default Go Daddy page display for an up-and-coming site. When I first bought this domain, it was indexed by Google: you could search for "alphabanter" and my site would show up on the search result page. Several months ago, Google seemed to de-index my domain, and if you type "alphabanter", my domain no longer shows up in the search results. It only shows up if you search for "www.alphabanter.com". Anyway, I am about to launch my site for real, but I don't quite know if I can get it back into Google's index. I have a few questions:

    1) Was my domain permanently penalized by Google and removed from their index just because it was a parked domain? I don't believe I have done anything abusive other than using the Go Daddy default page for almost a year because my site was not ready.
    2) Should I just launch my site, put a few backlinks to it, and hope that Google indexes it again?
    3) Should I submit my site to Google at "Google submit your content"?

    I assume asking Google to reconsider my site is the last option if none of the above works.

    Read the article

  • Site to site VPN using RRAS from an untrusted network?

    - by DrZaiusApeLord
    Our remote office will be moving to a new space where internet will be provided. They'll be behind a router doing NAT (I do not have admin rights to this router). They will be sharing a printer with the other people on the LAN, but will need VPN access to our network for email and file shares. I was thinking of just having them run the Windows VPN client and connect via PPTP as they do when they are off-site, but I have read that multiple PPTP connections from the same NAT'd address to the same destination don't work well, or at all. I am thinking some kind of site-to-site VPN is needed so there is just one tunnel. Can I just put in a VPN gateway, set it to connect to our RRAS/PPTP server, and have them use it as their default gateway? Perhaps even use the local default gateway for internet traffic. If so, what VPN gateway/device is recommended for this? Or other solutions? Thanks.

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here. Endeca Information Discovery plays a key role in your big data analysis and complements Oracle Business Intelligence solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps you understand its value proposition. We will walk you through four key components of the product:

    - Oracle Endeca Server - a highly scalable, search-analytical database that derives the data model from the data presented to it, thereby reducing data modeling requirements.
    - Studio - a highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications.
    - Integration Suite - provides rapid unification and enrichment of diverse sources of information into a single integrated view.
    - Extensible Value-Added Modules - add-on modules that provide value quickly through configuration instead of custom coding.

    Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio.

    Lab Outline: the labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class.

    Audience: Oracle Partners, Big Data Analytics Developers and Architects, BI and EPM Application Developers and Implementers, Data Warehouse Developers.

    Equipment Requirements: this workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements.

    Hardware:
    - 8 GB RAM is highly recommended (a 64-bit Windows machine is required)
    - 40 GB free space (includes staging)
    - USB 2.0 port (at least one available)

    Software:
    - One of the following operating systems: a 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008), or a 64-bit host/laptop OS with a Windows VM (Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 8.x, Firefox 3.6, or Firefox 6.0
    - WinRAR or 7zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/

    Oracle Endeca Information Discovery Workshop. Register here: October 3-5, 2012, Cinisello Balsamo, Milan. We will confirm your place within 2 weeks. Questions? Send email to [email protected] : Oracle Platform Technologies Enablement Services.

    Read the article

  • How to configure chrome to open magnet url's with deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (Natty) from 10.10, I can no longer open magnet (torrent) links in Chromium and have deluge automatically open and accept the URL. (Edit: currently ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set deluge as the default torrent client. (And there also doesn't seem to be a "default application" setting for the bittorrent client to replace transmission with deluge.) Notes: I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me? I'm not using Firefox, so manually setting apps for MIME types or extensions doesn't work (that's not an option in chrome/chromium, afaik - you have to rely on the OS). I uninstalled transmission, and then basically nothing happened when clicking on torrent/magnet links. Running from the shell also opens transmission (not deluge):

        xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"

    My current url handlers are:

        $ gconftool -a /desktop/gnome/url-handlers/magnet
        command = deluge "%s"
        needs_terminal = false
        enabled = true

    The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:

        $ cat /usr/bin/transmission-gtk
        #!/bin/bash
        deluge "$@"

    Anyone else run into this, know of a bug, workaround, or...?
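
    One thing that may be worth trying (a sketch, untested for this exact setup): around this era, GNOME began resolving URL scheme handlers through the x-scheme-handler MIME mechanism rather than the gconf url-handlers keys shown above, which would explain why the gconf entry is ignored. Assuming deluge ships a desktop file named deluge.desktop, the xdg-utils way to register it would be:

        # Register deluge as the handler for magnet: links
        # (deluge.desktop is an assumption; check /usr/share/applications/)
        xdg-mime default deluge.desktop x-scheme-handler/magnet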

    Read the article

  • Can not set Windows XP Keyboard configuration

    - by Bluebird75
    A friend of mine came to me with a French Windows XP where Windows had decided that the keyboard would now be the English-US layout instead of the French one. After a few attempts, I came to the strange conclusion that it's impossible to change it: whatever keyboard configuration I apply, the language dialog does not complain but sticks to the English-US layout. I tried:

    - setting two keyboards, with FR as default
    - setting two keyboards, with EN-US as default
    - one keyboard, FR

    It's as if it were impossible to apply a new keyboard configuration. Any idea what could be wrong?

    Read the article

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multifractal. I don't understand how to generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, taking that position, translating it into u,v coordinates, determining the pixel position directly from the u,v coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but a few other things may further complicate my problem. First of all, I intend to use "height maps" with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning I don't have any vertices to sample for the RMF for most of the pixels. (The main reason the height map resolution is higher is so that I can use it to generate a normal map; otherwise the height maps serve little purpose, as I can just displace vertices directly, as I currently do.) I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

        "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously
        described, a normalized height value ([0, 1]) is calculated. Using this height value,
        the terrain position is calculated and stored in the first three channels of the
        positionmap (RGB) - this will be used to calculate the normalmap. The fourth channel (A)
        is used to store the height value itself, to be used in the heightmap."

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels if vertices cannot be used.
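
    If it helps, here is a minimal sketch (mine, not from the paper) of the usual approach: each pixel is mapped through the patch's own extent on the cube face, so no mesh vertices are involved at all - the noise is sampled per pixel, not per vertex. Names are illustrative, and plain normalization is the simplest cube-to-sphere mapping; the paper's exact mapping may differ:

        import numpy as np

        def patch_heightmap(corner, right, up, res=192, rmf=lambda p: 0.0):
            """Sample a height map for one quadtree patch.
            corner: 3D position of the patch's corner on the cube face
            right, up: vectors spanning the patch across the cube face
            rmf: 3D ridged-multifractal noise, returning a height in [0, 1]
            """
            hmap = np.zeros((res, res))
            for y in range(res):
                for x in range(res):
                    u = x / (res - 1.0)                # pixel -> patch-local [0, 1]
                    v = y / (res - 1.0)
                    p = corner + u * right + v * up    # point on the cube face
                    s = p / np.linalg.norm(p)          # cube-to-sphere: project onto sphere
                    hmap[y, x] = rmf(s)                # sample the noise on the sphere
            return hmap

    The 16x16 patch vertices can then be displaced by sampling this same map at their matching u,v coordinates, which keeps the mesh and the normal map consistent.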

    Read the article

  • The JDEdwards EnterpriseOne PreSales University

    - by Julien Haye
    Istanbul, NOV 5-9. Wednesday, NOV 7 - it is raining outside, and I am sitting in my hotel room (#106) in Istanbul writing my first blog entry. Today this blog was enabled, and I am excited to be able to share my (first) thoughts with the EMEA JDE Partner Community. I am here in Istanbul because we are currently running the JD Edwards PreSales University event series. This PreSales University is an established series which we are now delivering for the fifth time, and for the first time in the ECEMEA region. Delegates value the openness and competence of the Product Strategy and Product Development teams from Denver and India. Together with the regional Oracle PreSales team, we had very valuable discussions around product features and functions and about the business value of the newly delivered applications and tools. Additionally, the event provides endless opportunities to exchange ideas with other JD Edwards partners and the Oracle PreSales team. With its focus on sharing and learning, best practice, user experience and transforming technologies, delegates will leave this event with an abundance of new ideas and best practices to try in their coming projects and existing customer implementations. A day out of the office gives delegates a chance to gain a new perspective on their business processes. Everybody sees better ways of working just by being immersed in an environment where the focus is on using products more effectively. Apps Track: highly concentrated participants in Istanbul listening to Jeff Erickson presenting the news about OneView Reporting. Jeff: we believe "The things you said". The event is organized into two tracks, one for Apps and one for Tech. Everybody was able to learn new features and functions and how to position these products. The focus was on the new Apps release 9.1 and Tools release 9.1.2 and their value propositions. Hands-on exercises were given to the participants for all topics, and even very experienced senior consultants learned a lot from this event. In total we have 55 people registered, and we still have some more content to deliver. By the way: Istanbul is a nice place to be. I have already booked my next trip to this beautiful city - in two weeks we deliver the JD Edwards EECIS Executive Forum, again in Istanbul. Once again a tough agenda. I will let you know if I manage to take a walk outside and see a bit more of this beautiful city. At least I expect to have a different room number. Many greetings, Hartmut Wiese, Oracle Alliances & Channels EMEA

    Read the article

  • Ubuntu-installer fails preseed configuration file

    - by user76171
    I am trying to install Ubuntu 12.04 over the network, unattended. I installed a DHCP server (Dnsmasq) and a TFTP server (tftpd-hpa); I got the netboot.tar.gz archive with the pxelinux.0 file, the pxelinux.cfg directory, the Linux kernel and the initrd.gz image; and I put a preseed file on my web server. Dnsmasq, tftpd-hpa, pxelinux and Apache are all on the same machine. The PC's motherboard doesn't support PXE, so I use iPXE and boot it from a CD. The PC gets an IP from DHCP, then iPXE loads pxelinux.cfg/default, which I edited like this:

        timeout 5
        prompt 0
        default install
        label install
            kernel ubuntu-installer/i386/linux
            append vga=normal locale=en_GB setup/layoutcode=sl_SI console-setup/layoutcode=sl_SI netcfg/choose_interface=auto initrd=ubuntu-installer/i386/initrd.gz netcfg/get_hostname=ubuntux preseed/url=http://192.168.10.10/ins/preseed.cfg

    Then it loads the Linux kernel and the initrd.gz image. Then I get the question "Detect keyboard layout?" - I decided to bother with this later, so I answered No, and then English twice just to get through, and then I hit the error:

        The installer failed to process the preconfiguration file from
        http://192.168.10.10/ins/preseed.cfg. The file may be corrupt.

    I created the file myself and copied the d-i commands into it. I also tried to fetch preseed.cfg with a web browser, and it works fine. So why is the installer failing?

    Read the article

  • Suppress log messages about 3ware disk temperature changes on CentOS?

    - by Stefan Lasiewski
    I have a number of CentOS 5 servers which use 3ware RAID controllers. These servers are bugging my team with messages about minor temperature changes, like this:

        Jun  8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_01], SMART Usage Attribute: 194 Temperature_Celsius changed from 119 to 118
        Jun  8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_03], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 121

    How can I suppress these messages? According to man smartd.conf:

        To disable any of the 3 reports, set the corresponding limit to 0. Trailing zero
        arguments may be omitted. By default, all temperature reports are disabled ('-W 0').

    Yet on my systems, smartd is reporting temperature changes by default. I tried a manual approach. In /etc/smartd.conf, I have the following:

        /dev/twa0 -d 3ware,1 -a -W 0
        /dev/twa0 -d 3ware,3 -a -W 0

    But this still does not suppress the messages. Since these messages show up in /var/log/messages, LogWatch is sending unnecessary emails every night.
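
    A sketch of one possible explanation, untested and based only on the smartd.conf man page: -W controls the dedicated temperature report, but the messages above come from generic usage-attribute tracking, which -a enables; telling smartd to ignore attribute 194 explicitly should silence them:

        # /etc/smartd.conf - ignore changes to Temperature_Celsius (attribute 194)
        /dev/twa0 -d 3ware,1 -a -I 194
        /dev/twa0 -d 3ware,3 -a -I 194

    Note also that smartd ignores everything after a DEVICESCAN line, so these entries only take effect if the distribution's default DEVICESCAN directive is commented out.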

    Read the article

  • opening adobe reader results in infinite explorer.exe process creation loop

    - by irrational John
    First, apologies if the answer to this is only a Google away. I tried, honest I did, but I wasn't able to find anything about this problem posted elsewhere. I'm using Adobe Reader v9.3.2 on Windows 7 Home Premium 64-bit. If you want more system details, just request them. What happens is that when I attempt to open a PDF by clicking "Open" on it, (1) Adobe Reader never opens and (2) the explorer.exe program is (apparently) recursively launched. I base this on opening Task Manager and seeing a long list of explorer.exe processes under the "Processes" tab; usually there is only one. When I recreate this problem, the list of explorer.exe processes is at least a page or two long (too many to bother counting). I "correct" this problem by logging off and then logging back on, which kills all the explorer.exe tasks. Unfortunately, I don't know another way to terminate them all. Now here's the curious part: this only happens when I attempt to "Open" a PDF file. If instead I use the context menu (right mouse click on the PDF) and select "Open with" > "Adobe Reader 9.3", then Adobe Reader opens the file with no problem. It seems there is something wrong with the default open action for PDF files. However, I have been unable to fix this by changing the Windows settings. Here is what I have tried: when I open Control Panel > All Control Panel Items > Default Programs > Set Associations, I do not find an entry for file type .pdf - there are only entries for .pdfxml and .pdx. When I use "Open with" on a PDF file and select "Choose default program", the checkbox for "Always use the selected program to open this kind of file" is disabled (greyed out). I have also uninstalled and reinstalled Adobe Reader, but the problem persists. While obviously no lives are at stake here, this problem is annoying the frickin' heck out of me, and if I forget and recreate this bug, I have to stop everything I'm doing to clean it up. Any suggestions on how I might go about fixing this?

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column, give it the formula

        =[Start Time]

    and assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action: Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00 PM of the previous day. The Start Date Value (=[Start Time], output type: Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field. With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event, and change our formula to be:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time]+1,[Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time. Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice though that the raw value of the start time (in this case) is a non-fractional number (40,308), whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.

    Read the article

  • What's wrong with my ext4 partition?

    - by bumbling fool
    What is wrong with this picture? Top is output from "df -h", bottom is gparted. I suspect I'm missing a lot of free space. No problems other than that (yet). Can somebody suggest the best (non-destructive) way to correct this? Output of sudo dumpe2fs -h /dev/sda3 (source: http://pastebin.com/nAvrdT4E):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          9f6eff64-60d7-4eec-81d5-1e8acd818b38
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              1602496
        Block count:              6406144
        Reserved block count:     320306
        Free blocks:              4842284
        Free inodes:              1361222
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      1022
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8176
        Inode blocks per group:   511
        RAID stride:              32692
        Flex block group size:    16
        Filesystem created:       Sun Nov  8 18:18:13 2009
        Last mount time:          Tue Mar  1 01:04:27 2011
        Last write time:          Mon Feb 28 04:27:34 2011
        Mount count:              16
        Maximum mount count:      28
        Last checked:             Thu Feb 24 06:23:39 2011
        Check interval:           15552000 (6 months)
        Next check after:         Tue Aug 23 07:23:39 2011
        Lifetime writes:          227 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       268015
        Default directory hash:   half_md4
        Directory Hash Seed:      cc101517-e617-482b-a883-a72919419c84
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x001d3000
        Journal start:            7787

    fdisk and parted output per requests: http://pastebin.com/EGVH7Ken

    Read the article

  • BizTalk Send Ports, Delivery Notification and ACK / NACK messages

    - by Robert Kokuti
    Recently I worked on an orchestration which sent messages out to a Send Port on a 'fire and forget' basis. The idea was that once the orchestration passed the message to the Messagebox, it was left to BizTalk to manage the sending process. Should the send operation fail, the Send Port got suspended, and the orchestration completed asynchronously, regardless of the Send Port's success or failure. However, we still wanted to log the sending success, using the ACK / NACK messages. On normal ports, BizTalk generates ACK / NACK messages back to the Messagebox if the logical port's Delivery Notification property is set to 'Transmitted'. Unfortunately, this setting also causes the orchestration to wait for the send port's result, and should the Send Port fail, the orchestration will also receive a 'DeliveryFailureException' exception. So we may end up with a suspended port and a suspended orchestration - not the outcome wanted here; there was no value in suspending the orchestration in our case. There are a couple of ways to fix this:

    1. Catch the DeliveryFailureException (full type name Microsoft.XLANGs.BaseTypes.DeliveryFailureException) and do nothing in the orchestration's exception block. Although this works, it still slows down the orchestration, as it still has to wait for the outcome of the send port operation.

    2. Use a Direct Port instead, and set the ACK request on the message context prior to passing it to the port:

        msgToSend(BTS.AckRequired) = true;

    This has to be done in an Expression shape, as a Direct logical port does not have a Delivery Notification property - make sure to add a reference to Microsoft.BizTalk.GlobalPropertySchemas. Setting this context value on the message causes the messaging agent to create an appropriate ACK or NACK message after the port execution. The ACK / NACK messages can be caught and logged by dedicated Send Ports, filtering on the BTS.AckType value (which is either ACK or NACK). ACK / NACK messages are treated in a special way by BizTalk, and a useful feature is that the original message's context values are copied to the ACK / NACK message context - these can be used for logging the right information. Other useful context properties of the ACK / NACK messages:

    - BTS.AckSendPortName can be used to identify the original send port.
    - BTS.AckOwnerID (aka http://schemas.microsoft.com/BizTalk/2003/system-properties.AckOwnerID) holds the instance ID of the failed Send Port and can be used to resubmit or terminate the instance.

    Someone may ask: can we just turn off Delivery Notification on a 'normal' port and set the AckRequired property on the message, as for a Direct port? Unfortunately, this does not work - BizTalk seems to remove this property automatically if the message goes through a port where Delivery Notification is set to None.
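
    For illustration, the filter on such a dedicated logging send port would look something like this (a sketch; the property and values are as described above, the port name is hypothetical):

        // Filter expression on a hypothetical "LogNacks" send port
        BTS.AckType == NACK

    An ACK-logging port would filter on BTS.AckType == ACK instead, and could read BTS.AckSendPortName from the context to record which port the acknowledgment belongs to.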

    Read the article
