Search Results

Search found 97840 results on 3914 pages for 'custom data type'.

  • prevent filesystem from entering read-only mode

    - by user788171
    I have found that my server's filesystem is continuously entering read-only mode. There have been some issues with the RAID1 array, and I have removed the bad disk from the array. However, it is still physically plugged into the system because I haven't had a chance to go over to the datacentre, and I suspect udev and the kernel are still picking up the bad disk and throwing errors. In /var/log/messages, there are errors like this:

      Mar 2 06:53:14 nocloud kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
      Mar 2 06:53:14 nocloud kernel: ata1: irq_stat 0x00400040, connection status changed
      Mar 2 06:53:14 nocloud kernel: ata1: SError: { PHYRdyChg DevExch }
      Mar 2 06:53:14 nocloud kernel: ata1: hard resetting link
      Mar 2 06:53:20 nocloud kernel: ata1: link is slow to respond, please be patient (ready=0)
      Mar 2 06:53:21 nocloud kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Mar 2 06:53:21 nocloud kernel: ata1.00: configured for UDMA/133
      Mar 2 06:53:21 nocloud kernel: ata1: EH complete

    This happens fairly randomly throughout the day until eventually the filesystem becomes read-only. When this happens, my system becomes non-operational, which kind of defeats the purpose of having a RAID1. Note: ata1 is the bad disk (I think ata1 corresponds to /dev/sda because they are both first in line). Under mdadm, /dev/sda1,2 is no longer being used, but I can't prevent the kernel from continuing to query that disk and throwing these errors when I am no longer using it. Is there a way to prevent my filesystem from automatically going into read-only mode? Furthermore, is it safe to do so? Thanks in advance.

    EDIT: Additional information: output from cat /proc/mdstat:

      md1 : active raid1 sdb2[1]
            976554876 blocks super 1.1 [2/1] [_U]
            bitmap: 5/8 pages [20KB], 65536KB chunk
      md0 : active raid1 sdb1[1]
            204788 blocks super 1.0 [2/1] [_U]

    Output from mount:

      /dev/mapper/VolGroup-LogVol00 on / type ext4 (rw,noatime)
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
      /dev/md0 on /boot type ext4 (rw)
      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
      sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    EDIT2: pvdisplay output:

      --- Physical volume ---
      PV Name               /dev/md1
      VG Name               VolGroup
      PV Size               931.32 GiB / not usable 2.87 MiB
      Allocatable           yes (but full)
      PE Size               16.00 MiB
      Total PE              59604
      Free PE               0
      Allocated PE          59604
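
    For context: the remount-read-only behavior described here is governed by the ext4 "errors=" mount option, and the stray-disk probing can be stopped by deleting the device from the SCSI layer until it can be pulled physically. A minimal sketch of both, assuming the bad disk really is /dev/sda (the device and filesystem names below are illustrative, taken from the mount output above, and masking I/O errors with errors=continue risks silent corruption):

      # Tell the kernel to forget the removed disk until it can be unplugged
      echo 1 > /sys/block/sda/device/delete

      # Check how the root filesystem currently reacts to errors
      tune2fs -l /dev/mapper/VolGroup-LogVol00 | grep -i "errors behavior"

      # Switch the policy from remount-ro to continue (or set errors=continue in /etc/fstab)
      tune2fs -e continue /dev/mapper/VolGroup-LogVol00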

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests to continue to map to 1.2.3.4, so I'm trying to configure a view in BIND. Unfortunately, BIND fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is.

      options {
          directory "/var/cache/bind";
          forwarders { 8.8.8.8; 8.8.4.4; };
          auth-nxdomain no;    # conform to RFC1035
          listen-on-v6 { any; };
      };

      zone "." { type hint; file "/etc/bind/db.root"; };
      zone "localhost" { type master; file "/etc/bind/db.local"; };
      zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
      zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
      zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };

      view "internal" {
          zone "example.com" {
              type master;
              notify no;
              file "/etc/bind/db.example.com";
          };
      };

      zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };

      zone "100.168.192.in-addr.arpa" {
          type master;
          notify no;
          file "/etc/bind/db.192";
      };

    I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?
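
    The failure mode in this configuration matches a well-known BIND rule: once any "view" statement appears in named.conf, every zone must live inside a view; top-level zones and views cannot be mixed. A minimal sketch of the likely fix (the match-clients addresses are illustrative assumptions, not taken from the question):

      acl internal-nets { 192.168.0.0/16; 127.0.0.1; };

      view "internal" {
          match-clients { internal-nets; };
          zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; };
          zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };
          zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; };
          # the root hint and localhost zones must move into a view as well
      };

      view "external" {
          match-clients { any; };
          # zones carrying the public answers (external example.com data) go here
      };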

    Read the article

  • Oracle OpenWorld Update: Demo Pods and Hands-on Labs

    - by Doug Reid
    Less than one week away until the start of Oracle OpenWorld 2012, and the Data Integration Solutions team is ready to go! We have an exciting lineup for you this year, which we have summarized in the Oracle OpenWorld Focus on Data Integration Solutions document. In past posts we have discussed session themes and our customer panel; today I would like to summarize the Hands-on Labs and Demo Pods that we have available for attendees.

    For Oracle GoldenGate we are running two Hands-On Labs this year.

    Deep Dive into Oracle GoldenGate (Thursday, October 4th at 11:15 AM in the Marriott Marquis, Salon 1/2): Oracle GoldenGate provides real-time log-based change data capture and delivery between heterogeneous systems. It enables cost-effective, low-impact, real-time data integration and continuous availability solutions. This session covers Oracle GoldenGate 11g's internal product architecture and includes a hands-on lab that covers configuration examples for target database instantiation and real-time change data capture and delivery. Participants will configure Oracle GoldenGate to instantiate a secondary database that can be used as a disaster recovery or reporting instance. Come learn how easy it is to use and how this can be a very valuable and easy technology solution for your organization.

    Introduction to Oracle GoldenGate Veridata (Wednesday, October 3rd at 10:15 AM in the Marriott Marquis, Salon 1/2): Oracle GoldenGate Veridata compares one set of data with another and identifies data that is out of synchronization. In this hands-on lab, you will be introduced to the key features of this product. Using the Oracle GoldenGate Veridata web client, you will have the opportunity to configure comparison objects and rules, initiate a comparison, review the status and output of a comparison, and review out-of-sync data.

    As a bonus this year, we have recorded the labs and made them available on youtube.com/oraclegoldengate. These will be available the day of the labs.

    Our demo pods are an opportunity for attendees to see our products, but more so to meet the product management and development teams. I would like to point out that we have two Oracle GoldenGate 11gR2 demo pods, one in the database camp and the other in the middleware camp. The one in the middleware camp will cover all platforms, while the one in the database camp will focus on the Oracle platform. The other two I would like to point out are the Monitoring Oracle GoldenGate and the Oracle Enterprise Manager demo pods; both will focus on methods to monitor GoldenGate, but the OEM demo pod will have a specific focus on the Oracle GoldenGate Management Pack plug-in for OEM. Below is a list of our demo pods and their locations.

      Monitoring Oracle GoldenGate for End-to-End Visibility: Moscone South, Right - S-241
      Oracle Data Integrator and Oracle GoldenGate for Oracle Applications: Moscone South, Right - S-240
      Oracle GoldenGate 11gR2 New Features: Moscone South, Right - S-239
      Oracle GoldenGate 11gR2: Real-Time, Transactional Database Replication: Moscone South, Left - S-027
      Oracle GoldenGate Veridata and Adapters: Moscone South, Right - S-242
      Oracle Enterprise Manager: Moscone South, Left - S-040

    Keep tuned to our blog during the show for news and highlights from the Data Integration Solutions team. See you there.

    Read the article

  • Our embedded linux system won't recognize a USB Device if it is plugged in before powerup. Suggestions?

    - by Blaine
    We are developing on a small embedded device: a Gumstix Overo board running OpenEmbedded Linux. Our development is almost completely done, and we have run into the strangest of bugs that we can't figure out.

    We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan, and the device can then be used by the host system.

    The problem is that if the device is plugged into the Gumstix before we turn the Gumstix on, the USB device apparently is not probed by the system (and hence does not turn on). In a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening: there is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up: it turns on, shows up on the USB bus, and we can access it with our driver.

    On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. That behavior is what I would consider "normal": the USB bus is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system has booted. Unfortunately this isn't possible in our final product; everything comes on at once.

    Additional info:
    1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected.
    2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs/controllers. It is literally as if the device is not present and not plugged in.
    3) We have tried a brand new image from Gumstix and an older image from last year. Both images have this problem, and it exists on all 3 Gumstix devices we use.

    Does anyone have any suggestions? From what I can tell, it isn't really possible to do a "reboot" of the USB subsystem that is a complete emulation of unplugging and replugging a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel initializes the USB subsystem, but I'm not really sure. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine

    Output etc.:
      $ uname -a
      Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux

    When the system is up and running and the spectro is plugged in (working as intended), this is lsusb:

      Bus 001 Device 116: ID 2457:1022
      Device Descriptor:
        bLength 18, bDescriptorType 1, bcdUSB 2.00
        bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0
        bMaxPacketSize0 64, idVendor 0x2457, idProduct 0x1022, bcdDevice 0.02
        iManufacturer 1 (USB4000 1.01.11), iProduct 2 (Ocean Optics USB4000), iSerial 0
        bNumConfigurations 1
        Configuration Descriptor:
          bLength 9, bDescriptorType 2, wTotalLength 46, bNumInterfaces 1
          bConfigurationValue 1, iConfiguration 0, bmAttributes 0x80 (Bus Powered), MaxPower 400mA
          Interface Descriptor:
            bLength 9, bDescriptorType 4, bInterfaceNumber 0, bAlternateSetting 0
            bNumEndpoints 4, bInterfaceClass 255 (Vendor Specific Class)
            bInterfaceSubClass 0, bInterfaceProtocol 0, iInterface 0
            Endpoint Descriptor: bLength 7, bDescriptorType 5, bEndpointAddress 0x01 (EP 1 OUT), bmAttributes 2 (Bulk, Synch None, Usage Data), wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
            Endpoint Descriptor: bLength 7, bDescriptorType 5, bEndpointAddress 0x82 (EP 2 IN), bmAttributes 2 (Bulk, Synch None, Usage Data), wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
            Endpoint Descriptor: bLength 7, bDescriptorType 5, bEndpointAddress 0x86 (EP 6 IN), bmAttributes 2 (Bulk, Synch None, Usage Data), wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
            Endpoint Descriptor: bLength 7, bDescriptorType 5, bEndpointAddress 0x81 (EP 1 IN), bmAttributes 2 (Bulk, Synch None, Usage Data), wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
      Device Qualifier (for other device speed):
        bLength 10, bDescriptorType 6, bcdUSB 2.00, bDeviceClass 0 (Defined at Interface level)
        bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64, bNumConfigurations 1
      Device Status: 0x0000 (Bus Powered)

    dmesg output:

      usb usb1: usb auto-resume
      hub 1-0:1.0: hub_resume
      usb usb2: usb auto-resume
      ehci-omap ehci-omap.0: resume root hub
      hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000
      hub 2-0:1.0: hub_resume
      hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000
      hub 1-0:1.0: hub_suspend
      usb usb1: bus auto-suspend
      hub 2-0:1.0: hub_suspend
      usb usb2: bus auto-suspend
      ehci-omap ehci-omap.0: suspend root hub
      usb usb2: usb resume
      ehci-omap ehci-omap.0: resume root hub
      hub 2-0:1.0: hub_resume
      ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT
      hub 2-0:1.0: port 2: status 0501 change 0001
      hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000
      hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s
      ehci-omap ehci-omap.0: port 2 high speed
      ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
      usb 2-2: new high speed USB device using ehci-omap and address 2
      ehci-omap ehci-omap.0: port 2 high speed
      ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
      usb 2-2: default language 0x0409
      usb 2-2: udev 2, busnum 2, minor = 129
      usb 2-2: New USB device found, idVendor=2457, idProduct=1022
      usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
      usb 2-2: Product: Ocean Optics USB4000
      usb 2-2: Manufacturer: USB4000 1.01.11
      usb 2-2: uevent
      usb 2-2: usb_probe_device
      usb 2-2: configuration #1 chosen from 1 choice
      usb 2-2: uevent
      usb 2-2: adding 2-2:1.0 (config #1, interface 0)
      usb 2-2:1.0: uevent
      drivers/usb/core/inode.c: creating file '002'

    dmesg has nothing to say, and lsusb simply lists nothing but the two default USB controllers/hubs if we plug the device in before the system is turned on.
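
    For debugging, it is sometimes possible to force the host controller to re-enumerate everything on its bus from software, which approximates an unplug/replug cycle. A hedged sketch, assuming the root hub of interest is usb2 as in the dmesg output above (sysfs paths can differ across kernel versions, so treat this as something to try rather than a known fix):

      # Unbind and rebind the root hub to force re-enumeration of attached devices
      echo usb2 > /sys/bus/usb/drivers/usb/unbind
      echo usb2 > /sys/bus/usb/drivers/usb/bind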

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes something really odd happens when using our computers that makes no sense at all, such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question
    SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up: "I was messing around with some height map images and found this one: http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600x10800.jpg. The image is 21,600 x 10,800 pixels in size. When I right click and select 'Copy Image' in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1."

    Why would a simple image freeze Joban's computer after copying it to the clipboard?

    The Answer
    SuperUser contributor Mokubai has the answer for us: "Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24-bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression, hence the compressed file is only 6 MB.

    The reason it makes your computer slow is that it is probably filling your memory with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out.

    In short: do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

    Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: it starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up to 6.3 GB of RAM before settling back down at the 4.5-ish GB you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Read the article

  • Responding to the page unload in a managed bean

    - by frank.nimphius
    Though ADF Faces provides an uncommitted-data warning functionality, developers may have a requirement to respond to the page unload event with custom application code, programmed in a managed bean. The af:clientListener tag that is used in ADF Faces to listen for JavaScript and ADF Faces client component events does not provide the option to listen for the unload event, so this often-recommended way of implementing JavaScript in ADF Faces does not work for this use case. To send an event from JavaScript to the server, ADF Faces provides the af:serverListener tag, which you use to queue a CustomEvent that invokes a method in a managed bean.

    While this is part of the solution, testing shows that the browser's native JavaScript unload event itself is not very helpful for sending an event to the server with the af:serverListener tag. The reason is that when the unload event fires, the page has already been unloaded, and the ADF Faces AdfPage object needed to queue the custom event already returns null. So the solution to handling the page unload event is the beforeunload event, which I am not sure all browsers support; I tested IE and FF, and they evidently do. To register the beforeunload event, you use an advanced JavaScript programming technique that dynamically adds listeners to page events.

      <af:document id="d1" onunload="performUnloadEvent"
                   clientComponent="true">
        <af:resource type="javascript">
          window.addEventListener('beforeunload',
                                  function (){performUnloadEvent()}, false);

          function performUnloadEvent(){
            //note that af:document must have clientComponent="true" set
            //for JavaScript to access the component object
            var eventSource = AdfPage.PAGE.findComponentByAbsoluteId('d1');
            //vars x and y are dummy variables obviously needed to keep the page
            //alive for as long as it takes to send the custom event to the server
            var x = AdfCustomEvent.queue(eventSource,
                                         "handleOnUnload",
                                         {args:'noargs'}, false);
            //replace args:'noargs' with key:value pairs if your event needs to
            //pass arguments and values to the server-side managed bean
            var y = 0;
          }
        </af:resource>
        <af:serverListener type="handleOnUnload"
                           method="#{UnloadHandler.onUnloadHandler}"/>
        <!-- rest of the page goes here -->
      </af:document>

    The managed bean method called by the custom event has the following signature:

      public void onUnloadHandler(ClientEvent clientEvent) {
      }

    I don't really have a good explanation for why the JavaScript variables "x" and "y" are needed, but this is how I got it working. To me it once again shows how fragile custom JavaScript development is, and why you should stay away from using it whenever possible.

    Note: If the unload event is produced through navigation in JavaServer Faces, there is no need to use JavaScript for this. If you know that navigation is performed from one page to the next, the action you want to perform can be handled in JSF directly, in the context of the lifecycle.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?
    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact for existing products
    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development
    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated, and thus costly changes at a later stage can be avoided.

    Integrated compliance management
    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM, at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation, is fully covered end-to-end while remaining transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)
    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs. Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally; manual data entry is also supported. A set of compliance-specific dashboards and reports completes the functionality.

    Conclusion
    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running in the cloud that you need to connect with on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release.

    Most enterprises have spent years avoiding the data "silos" that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads, while at the same time using a SaaS-based sales application to create opportunities and quotes. The sales and marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example can be extended to other application areas such as HR, supply chain, and finance, and to the demands users place on getting a consistent view of the data.

    When it comes to moving data in hybrid environments, some of the key requirements are minimal latency, reliability, and security:

    Data must remain fresh. As data ages it becomes less relevant and less valuable; day-old data is often insufficient in today's competitive landscape.
    Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances.
    Security is a key concern when replicating between cloud and on-premises instances.

    There are several options to consider when replicating between cloud and on-premises instances.

    Option 1 - Secured network established between the cloud and on-premises. A secured network is established between the cloud and on-premises, which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications, irrespective of where they are physically located.

    Option 2 - Restricted network established between the cloud and on-premises. A restricted network is established between the cloud and on-premises instances, which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances, and white-lists the IP addresses of the cloud and on-premises instances.

    Option 3 - Restricted network access from on-premises and cloud through HTTP proxy. This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not white-listed on the on-premises instance. Tunneling through an HTTP proxy should only be considered when proper security exceptions are obtained.

    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. It addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling business processes to run effectively and reliably. The architecture diagram below illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In this scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances; on the cloud instance it is installed and configured on the machine from which the database instance can be accessed. Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on-premises instances. The specific configuration details of the Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the instances.

    The knowledge article (ID 1588484.1) titled "Replicating between Cloud and On-Premises using Oracle GoldenGate" discusses the options for replicating between the cloud and on-premises instances in detail; the article can be found on My Oracle Support. To learn more about Oracle GoldenGate 12c, register for our launch webcast, where we will go into these new features in more detail. You may also want to download our white paper "Oracle GoldenGate 12c Release 1 New Features Overview". I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article. Please post your comments in this blog or in the Oracle GoldenGate public forum: https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate
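
    As an illustration of what the replication processes look like once connectivity is in place, here is a minimal sketch of a classic GoldenGate Extract parameter file pushing changes from an on-premises database toward a cloud instance. All names (process, user, host, trail, schema) are hypothetical placeholders, not taken from the article or the knowledge article it references:

      EXTRACT extc1
      USERID ogg_user, PASSWORD ********
      -- remote manager and trail on the cloud instance
      RMTHOST cloud-db.example.com, MGRPORT 7809
      RMTTRAIL ./dirdat/ct
      -- capture changes for the replicated schema
      TABLE sales.*;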

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there's a better way, according to a recent Wall Street Journal article: "Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility."

    Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you've been trapped in some sort of covert psychological-stress test? Another WSJ article, called "The Self-Service Airport," says there's reason for hope there as well: "Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, 'your first interaction could be with a flight attendant,' said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc."

    And in the topsy-turvy world of air travel, it's not just the passengers who've been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges, some beyond their control and some not, that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year.

    In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to manage some of the factors outside their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: "Airlines say the advanced technology will quicken the airport experience for seasoned travelers—shaving a minute or two from the checked-baggage process alone—while freeing airline employees to focus on fliers with questions. 'It's more about throughput with the resources you have than getting rid of humans,' said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA."

    Oracle is attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:

    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle's Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle's Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time.

    I'll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell. First, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline, so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it's back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: "In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers."

    (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host (me too)

    - by dgerman
    See similar: Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    Today, 6/19/12, attempting to ssh to the same host as usual, ssh replied:

      ssh_exchange_identification: Connection closed by remote host

    Two additional attempts failed:

      ssh -v $RWS
      OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: Applying options for *
      debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
      debug1: Connection established.
      debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
      debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
      debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
      debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
      ssh_exchange_identification: Connection closed by remote host

    ping to the host was successful, ftp to the host was successful, and ssh is now successful:

      ssh -v $RWS
      OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: Applying options for *
      debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
      debug1: Connection established.
      debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
      debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
      debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
      debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
      debug1: match: OpenSSH_4.3 pat OpenSSH_4*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.6
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'real-world-systems.com' is known and matches the RSA host key.
      debug1: Found key in /Users/dgerman/.ssh/known_hosts:5
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Next authentication method: publickey
      debug1: Offering RSA public key: /Users/dgerman/.ssh/id_rsa
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Trying private key: /Users/dgerman/.ssh/id_dsa
      debug1: Next authentication method: password

    What gives?

    Environment: Mac OS X 10.4.7, OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011.

      /Users/dgerman/.ssh > ls -la
      total 24
      drwx------     7 dgerman staff   238 Jun 19 15:46 .
      drwxr-xr-x   389 dgerman staff 13226 Jun 19 15:46 ..
      -rw-------     1 dgerman staff  1766 Feb 26 18:25 id_rsa
      -rw-r--r--     1 dgerman staff   400 Feb 26 18:25 id_rsa.pub
      -rw-r--r--     1 dgerman staff    67 Feb 26 18:27 keyfingerprint
      -rw-r--r--     1 dgerman staff  6215 May  1 08:11 known_hosts
      -rw-r--r--     1 dgerman staff   220 Feb 26 18:26 randomart
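
    Intermittent ssh_exchange_identification errors like this usually mean the server dropped the TCP connection before the SSH banner exchange, commonly because of TCP wrappers (/etc/hosts.deny) or sshd's MaxStartups throttling of unauthenticated connections. A few hedged checks to run on the server side, assuming you have console or other access (the paths below are the usual defaults, not taken from the post):

      # Is sshd being denied by TCP wrappers?
      grep -i sshd /etc/hosts.allow /etc/hosts.deny

      # Is sshd dropping unauthenticated connections under load?
      grep -i maxstartups /etc/ssh/sshd_config

      # Watch the server-side log at the moment of a failure
      tail -f /var/log/secure    # or /var/log/auth.log on Debian/Ubuntu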

    Read the article

  • CNAME to another domain fails on some office networks, why?

    - by crashalpha
    Our domain "aspenfasteners.com" is hosted by Volusion. We have CNAME records "find" and "search" which point to site indexing accounts on www.picosearch.com. These addresses fail on SOME private office networks which have their own DNS. We suspect the problem comes from Volusion's own name servers, n2.volusion.com and n3.volusion.com. Volusion support on problems this technical is non-existant. We have tried an NSLOOKUP on find.aspenfasteners.com with level 2 debugging info, and we got the results below. Is it possible that the local DNS is recursing to Volusion's name servers, and that while Volusion DOES return the canonical name, they do NOT resolve the address? Can anybody with expertise in this sort of stuff PLEASE look at the NSLOOKUP below and tell me if we are right, because Volusion is giving me absolutely NO support on this topic. I need proof of where the problem lies. Thanks VERY much! Carlo find.aspenfasteners.com Server: mtl-srm-dbsv-01.fastenerwholesale.com Address: 192.168.0.44 SendRequest(), len 61 HEADER: opcode = QUERY, id = 8, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN ------------ Got answer (138 bytes): HEADER: opcode = QUERY, id = 8, rcode = NXDOMAIN header flags: response, auth. answer, want recursion, recursion avail. questions = 1, answers = 0, authority records = 1, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN AUTHORITY RECORDS: -> fastenerwholesale.com type = SOA, class = IN, dlen = 46 ttl = 3600 (1 hour) primary name server = mtl-srm-dbsv-01.fastenerwholesale.com responsible mail addr = admin.fastenerwholesale.com serial = 10219 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ------------ SendRequest(), len 41 HEADER: opcode = QUERY, id = 9, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ------------ Got answer (141 bytes): HEADER: opcode = QUERY, id = 9, rcode = NXDOMAIN header flags: response, auth. answer questions = 1, answers = 1, authority records = 1, additional = 1 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ANSWERS: -> find.aspenfasteners.com type = CNAME, class = IN, dlen = 17 canonical name = www.picosearch.com ttl = 3600 (1 hour) AUTHORITY RECORDS: -> com type = SOA, class = IN, dlen = 43 ttl = 900 (15 mins) primary name server = ns3.volusion.com responsible mail addr = admin.volusion.com serial = 1 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ADDITIONAL RECORDS: -> ns3.volusion.com type = A, class = IN, dlen = 4 internet address = 65.61.137.154 ttl = 900 (15 mins) * mtl-srm-dbsv-01.fastenerwholesale.com can't find find.aspenfasteners.com: Non-existent domain

    Read the article

  • Introduction to WebCenter Personalization: "The Conductor"

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...
    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor
    Scenarios are declarative, offline-authored documents built using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can:

    • Create variables that are scoped to the current execution context
    • Iterate over collections, or loop until a specific condition is met
    • Execute one or more statements when a condition is met
    • Invoke other scenarios that exist within the same namespace
    • Invoke a data provider that integrates with custom applications

    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models
    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:

    • REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    • Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
    • Expression Language (EL): Allows the results of Scenario execution to control your user interface, or embeds personalized content, using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework
    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:

    Function Provider: Function Providers are simple Java annotated classes with static methods that are meant to be served as utilities. Common uses include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL functions, Function Providers are based on the same concept.

    Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and the WebCenter Activity Graph.

    Useful References
    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:

    Personalizing WebCenter Applications
    Authoring Personalized Scenarios in JDeveloper
    Using Personalization APIs Externally
    Implementing and Calling Function Providers
    Implementing and Calling Data Providers

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our web service examples. In this posting we'll look at firing the web service asynchronously and then filling in the UI when it completes. This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: this is a different workspace than WS-Part2.

    What's different? In this example, when you click the Search button on the Forecast By Zip option, it now takes you directly to the results page, which is initially blank. When the web service returns a second or two later, the data pops into the UI. If you go back to the search page and hit Search, it will again clear the results and invoke the web service asynchronously. This isn't really that useful for this particular example, but it shows an important technique that can be used for other use cases.

    How it was done:
    1) First we created a new class, ForecastWorker, that implements the Runnable interface. This is our worker class: we create an instance of it and pass it to a new thread that we create when the Search button is pressed, inside the retrieveForecast actionListener handler. Once the thread is started, retrieveForecast returns immediately.
    2) The rest of the code that we previously had in the retrieveForecast method has been moved to retrieveForecastAsync. Note that we've also added synchronized specifiers on both of these methods so they are protected from re-entrancy.
    3) The run method of the ForecastWorker class then calls the retrieveForecastAsync method. This executes the web service code that we had previously, but now on a separate thread, so the UI is not locked. If we had already shown data on the screen, it would have appeared before this was invoked. Note that you do not see a loading indicator either, because this is on a separate thread and nothing is blocked.
    4) The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents(). We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them. As soon as you do, the UI is updated if any changes have been queued. A sketch of the resulting pattern appears below.

    Summary of fundamental changes in this application: The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns. This allows an application to provide a better user experience in many cases, because data that is already available locally is displayed while lengthy queries or web service calls run in the background, with the UI updated when they return. There are many different use cases for background threads, and this is just one example of optimizing the user experience and producing a better mobile application.
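
    A compact sketch of the pattern described in steps 1-4 above. The class and method names follow the posting's description, ForecastBean is a hypothetical name for the managed bean that owns the web service logic, and the import paths (oracle.adfmf.framework.api.AdfmfJavaUtilities, oracle.adfmf.amx.event.ActionEvent) are recalled from the ADF Mobile API rather than taken from the sample project, so verify them against the downloaded workspace:

      import oracle.adfmf.amx.event.ActionEvent;
      import oracle.adfmf.framework.api.AdfmfJavaUtilities;

      // Worker that runs the web service call off the UI thread
      public class ForecastWorker implements Runnable {
          private final ForecastBean owner;   // hypothetical bean holding the WS logic
          public ForecastWorker(ForecastBean owner) { this.owner = owner; }
          @Override
          public void run() {
              owner.retrieveForecastAsync();
          }
      }

      // In the managed bean:

      // actionListener handler: start the thread and return immediately
      public synchronized void retrieveForecast(ActionEvent event) {
          new Thread(new ForecastWorker(this)).start();
      }

      // Runs on the background thread
      public synchronized void retrieveForecastAsync() {
          // ... invoke the web service and update the local collections here ...
          // Data-change events queued on this thread are not seen by the UI
          // until they are explicitly flushed to the main thread:
          AdfmfJavaUtilities.flushDataChangeEvents();
      }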

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it is probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant
    A new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only he can access.
    Cons:
    - It is not cost-effective. SQL Azure databases are not cheap, and the minimum size for a database is 1 GB; you might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants
    A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user.
    Pros:
    - You only pay for a single database.
    - Data is isolated at the database level. If the credentials for one tenant are compromised, the rest of the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    Centralizing database access with stored procedures in a database shared by all the tenants
    A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You only have one set of objects to maintain and back up.
    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First for consuming the data.
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    SQL Federations
    A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria (a short T-SQL sketch follows below).
    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM can be used to consume the data.
    Cons:
    - There is no real isolation at the database level; the isolation is enforced programmatically with federations.
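
    For the federations option, the flow looks roughly like this in T-SQL. The names are illustrative, and the syntax is SQL Azure Federations as it existed at the time of the post (the feature was later retired):

      -- One-time setup: federate on a tenant key
      CREATE FEDERATION TenantFederation (TenantId BIGINT RANGE);

      -- Per-request: route the connection to the member holding tenant 42,
      -- with row-level filtering so only that tenant's rows are visible
      USE FEDERATION TenantFederation (TenantId = 42) WITH RESET, FILTERING = ON;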

    Read the article

  • Issue using a "used" SSD as a Windows 8.1 Boot Drive

    - by EpiGrad
    So, I'm something of a Mac person, but decided to take a stab at this whole "build yourself a PC" thing. Right now the thing is assembled, POSTs just fine, and can get to the BIOS. The problem is the drive I want to use: I intended to use an 80 GB Corsair SSD I've had sitting around as the boot drive, and a new Samsung SSD for games and the like. So I boot using a Windows 8.1 install USB stick, and if the Samsung drive is plugged in, it happily offers to install Windows on it. The Corsair drive, though, has flipped out: I reformatted it as a blank NTFS drive (it was HFS for Mac purposes), and the BIOS can't see it, nor can the Windows installer. What's wrong, and how do I fix it? The tools at my disposal are the current ASUS BIOS that came with my motherboard (a Z87I-Deluxe), and a Mac running the latest OS X which can also boot to Windows 7 if needed, via either Parallels or Boot Camp.

    Update 1: Based on a friend's suggestion to switch SATA ports, Windows 8.1's installer can now see the drive as Drive 0, Partition 1, an 83.8 GB "Primary" partition. But when I click it and hit "Next", I get the following error: "We couldn't create a new partition or locate an existing one. For more information, see the Setup log files" (not that it gives any clue how to access those).

    Update 2: Following a trail of Google suggestions, I ended up going into advanced tools and just reformatting the drive as follows:

    1. Start DISKPART.
    2. Type LIST DISK and identify your SSD disk number (from 0 to n disks).
    3. Type SELECT DISK <n> where <n> is your SSD disk number.
    4. Type CLEAN
    5. Type CREATE PARTITION PRIMARY
    6. Type ACTIVE
    7. Type FORMAT FS=NTFS QUICK
    8. Type ASSIGN
    9. Type EXIT twice (once to get out of DiskPart, the other to exit the command line tool)

    (per these instructions). This goes well enough, but now, when I select the disk for installation, I get a new error: "Windows 8 cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks." So, Googling that, I do the following:

      select disk 0
      clean
      convert gpt
      exit

    ...and we might have fixed it. Windows is at least trying to install now.

    Read the article

  • Windows 7 re-installation

    - by GTX OC
    I need to reinstall Windows 7, but the problem is that all my partitions are set to the dynamic disk type, and Windows cannot install on dynamic disk partitions. During the installation process there is an option to format the drive, but I cannot change the disk type. Is there any way to convert the partitions back to basic so that I can re-install Windows? I am a complete fool; I don't even know why I converted the partition in which Windows was installed to dynamic. I have a 1 TB HDD with 4 partitions, all of them dynamic. Thanks in advance.
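
    For reference, the built-in tooling converts a dynamic disk back to basic only once every volume on it has been deleted, so a full backup comes first. A minimal sketch using the setup's command prompt (Shift+F10 during Windows installation) and diskpart, assuming the drive is disk 0; note that CLEAN destroys everything on the disk:

      DISKPART
      SELECT DISK 0
      DETAIL DISK          (confirm this is the dynamic disk)
      CLEAN                (wipes all volumes and the dynamic-disk metadata)
      CONVERT BASIC
      EXIT

    Third-party partition tools advertise non-destructive dynamic-to-basic conversion, but with the built-in tools the backup-clean-convert path above is the supported route.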

    Read the article

  • Controller Action Methods with different signatures

    - by Narsil
    I am trying to get my URLs in files/id format. I am guessing I should have two Index methods in my controller, one with a parameter and one without, but then I get the error message shown below in the browser. Anyway, here are my controller methods:

    public ActionResult Index() { return Content("Index "); }

    // GET: /Files/5
    public ActionResult Index(int id) { File file = fileRepository.GetFile(id); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); }

    Error:

    Server Error in '/' Application. The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.Reflection.AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController]
    System.Web.Mvc.ActionMethodSelector.FindActionMethod(ControllerContext controllerContext, String actionName) +396292
    System.Web.Mvc.ReflectedControllerDescriptor.FindAction(ControllerContext controllerContext, String actionName) +62
    System.Web.Mvc.ControllerActionInvoker.FindAction(ControllerContext controllerContext, ControllerDescriptor controllerDescriptor, String actionName) +13
    System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +99
    System.Web.Mvc.Controller.ExecuteCore() +105
    System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39
    System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7
    System.Web.Mvc.<c_DisplayClass8.b_4() +34
    System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21
    System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +12
    System.Web.Mvc.Async.WrappedAsyncResult1.End() +59
    System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44
    System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7
    System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    Updated code:

    public ActionResult Index(int? id) { if (id.HasValue) { File file = fileRepository.GetFile(id.Value); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } else return Content("Index"); }

    It's still not the thing I want.
    Right now the URLs have to be in the files?id=3 format, and I want files/3. Here are my routes from Global.asax:

    routes.MapRoute(
        "Default", // Route name
        "{controller}/{action}/{id}", // URL with parameters
        new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
    );

    // files/3 - this is the one I wrote
    routes.MapRoute("Files", "{controller}/{id}",
        new { controller = "Files", action = "Index", id = UrlParameter.Optional }
    );

    I tried adding a new route after reading Jeff's post but I can't get it working. It still works with files?id=2 though.
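
    For what it's worth, here is a sketch of a route table that should make /files/3 resolve, assuming the nullable-id Index action from the update above (the route name and constraint are illustrative, not taken from the original project). ASP.NET MVC matches routes in registration order, so the specific route has to be registered before the catch-all Default route; giving it a literal Files segment plus a numeric constraint keeps it from swallowing other controllers' URLs:

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Specific route first: /Files/3 -> FilesController.Index(id = 3).
        routes.MapRoute(
            "FilesById",
            "Files/{id}",
            new { controller = "Files", action = "Index" },
            new { id = @"\d+" } // only numeric ids match this route
        );

        // Catch-all default route last.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }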

    Read the article

  • VBA WinHTTPRequest and submitting forms

    - by Hazerider
    Hi. I spent all day yesterday trying to figure out how to submit a form using WinHTTPRequest. I can do it pretty easily with an InternetExplorer object, but the problem is that I need to save a PDF file that gets returned, and I am not sure how to do this with the IE object. Here is the relevant HTML code snippet: <div class="loginHome-left"> <fieldset> <h3>Log in Using</h3> <form> <label for="standardLogin" accesskey="s"> <input name="useLogin" id="standardLogin" value="standard" type="radio" checked="true">Standard Login</label> &nbsp; <label for="rsaSecurID" accesskey="r"> <input name="useLogin" value="rsaSecur" type="radio" id="rsaSecurID" onclick="redirectLogin('ct_logon_securid');return false;">RSA SecurID</label> &nbsp; <label for="employeeNTXP" accesskey="e"> <input name="useLogin" id="employeeNTXP" value="employee" type="radio" onclick="redirectLogin('ct_logon_external_nt');return false; "> Employee Windows Login<br></label> </form> <br> <div class="error">Error: ...</div><br> <form onSubmit="if(validate(this)) {formSubmit();} return false;" name="passwdForm" method="post" action="/UAB/ct_logon"> <input value="custom" name="pageId" type="hidden"> <input value="custom" name="auth_mode" type="hidden"> <input value="/UAB/ct_logon" name="ct_orig_uri" type="hidden"> <INPUT VALUE="" NAME="orig_url" TYPE="hidden"> <input value="" name="lpSp" type="hidden"> <label for="user"> <strong>Username</strong> </label> <input autocomplete="off" name="user" type="text" value="" class="txtFld" onkeypress="return handleEnter(this, event);"> <br> <label for="EnterPassword"> <strong>Password</strong>&nbsp;&nbsp;(<a tabindex="-1" href="/UAB/BCResetWithSecrets">Forgot Your Password?</a>) </label> <input autocomplete="off" name="password" type="password" class="txtFld" onkeypress="return handleEnter(this, event);"> <INPUT id="rememberLogin" name="lpCookie" type="checkbox"> <label for="rememberLogin">Remember My Login Information</label><br> </form> <div class="right"> <br> <input type="image" src="/BC_S/images/bclogin/btn_login.gif" name="" value="Submit" onClick="if(validate(document.forms['passwdForm'])){formSubmit();}return false;"> </div> <div class="clearfix"></div> </fieldset> </div> In order to log in through InternetExplorer, I do the following: Sub TestLogin() Dim ie As InternetExplorer, doc As HTMLDocument, form As HTMLFormElement, inp As Variant Set ie = New InternetExplorer ie.Visible = True ie.navigate "https://URL of the login page" Do Until ie.readyState = READYSTATE_COMPLETE Loop Set doc = ie.document For Each form In doc.forms If InStr(form.innerText, "Password") <> 0 Then form.elements("user").Value = "my_name" form.elements("password").Value = "my_password" Exit For Else End If Next 'This is the unnamed input with an image that is used to submit the form' doc.all(78).Click ie.navigate "https://url of the PDF" Do Until ie.readyState = READYSTATE_COMPLETE Loop Dim filename As String, filenum As Integer filename = "somefile.pdf" filenum = FreeFile Open filename For Binary Access Write As #filenum Write #filenum, doc.DocumentElement.innerText Close #filenum ie.Quit Debug.Print Set ie = Nothing End Sub What I really would like to do is something along the lines of the following: Sub TestLogin3() Dim whr As New WinHttpRequest, postData As String whr.Open "POST", "https://live.barcap.com/UAB/ct_logon", False whr.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" whr.setRequestHeader "Connection", "Keep-Alive" whr.Send whr.WaitForResponse postData = 
"user=paschom1&password=change01" 'Or the following?' postData = "user=paschom1&password=change01&orig_url=&pageId=custom&auth_mode=custom&ct_orig_uri=/BC/dispatcher&lpSp=&lpCookie=off" whr.Send postData whr.WaitForResponse Debug.Print whr.responseText End Sub It just refuses to work though. Not sure if I need to use more setRequestHeader with Content-Form or something similar, and if I do, not sure what exactly I am supposed to pass it. If anyone has any advice regarding this, it would be hugely appreciated. I could probably use a perl module to do it, but I would rather keep it all in VBA if possible. Thanks, Marc.

    Read the article

  • RAID 5 RECONSTRUCT with RAID Reconstructor

    - by user22914
    I have a Dell PowerEdge 2600 server with a RAID 5 array of three SCSI hard drives, 36 GB each. It fails to boot since the third drive is offline. I attached a SATA adapter card and a SATA hard drive and installed Windows Server 2003 on it, downloaded the drivers for the RAID, and everything went fine when I used the data-recovery software called "GetDataBack" from http://www.runtime.org/data-recovery-software.htm. The problem is that not all data was recovered; I am still looking for important data of about 5 GB. I have another piece of software called "RAID Reconstructor" from http://www.runtime.org/raid.htm. I thought that if I ran it to reconstruct the array, that would help recover more data and put the third drive back online, but I am afraid it might erase the current data on the other drives. Please advise on how I could retrieve the remaining data. Thanks

    Read the article

  • jQuery for dynamic add/remove row function: clone() object cannot modify element name

    - by wcy0942
    I'm trying to use jQuery for a dynamic add/remove row function, but I've run into a problem in IE8: the clone() object's element names cannot be modified, and I cannot use the JavaScript form access (prhIndexed[i].prhSrc).functionKey. In Firefox it works very well. The source code is below; please do me a favor and help solve the problem. <html> $(document).ready(function() { //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //Define some variables - edit to suit your needs //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ // table id var _table = jQuery("#prh"); // modify here // tbody tbody var _tableBody = jQuery("tbody",_table); // buttons var _addRowBtn = jQuery("#controls #addRow"); var _insertRowBtn= jQuery("#controls #insertRow"); var _removeRowBtn= jQuery("#controls #removeRow"); //check box all var _cbAll= jQuery(".checkBoxAll", _table ); // add how many rows var _addRowsNumber= jQuery("#controls #add_rows_number"); var _hiddenControls = jQuery("#controls .hiddenControls"); var blankRowID = "blankRow"; //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //click the add row button //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ _addRowBtn.click(function(){ // when input not isNaN do add row if (! isNaN(_addRowsNumber.attr('value')) ){ for (var i = 0 ; i < _addRowsNumber.attr('value') ;i++){ var newRow = jQuery("#"+blankRowID).clone(true).appendTo(_tableBody) .attr("style", "display: ''") .addClass("rowData") .removeAttr("id"); } refreshTable(_table); } return false; //kill the browser default action }); //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //checkbox select all //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ _cbAll.click(function(){ var checked_status = this.checked; var prefixName = _cbAll.attr('name'); // find name prefix match check box (group of table) jQuery("input[name^='"+prefixName+"']").each(function() { this.checked = checked_status; }); }); //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //Click the remove all button //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ _removeRowBtn.click(function(){ var prefixName = _cbAll.attr('name'); // find name prefix match check box (group of table) jQuery("input[name^='"+prefixName+"']").not(_cbAll).each(function() { if (this.checked){ // remove tr row , ckbox name the same with rowid jQuery("#"+this.name).remove(); } }); refreshTable(_table); return false; }); //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //Click the insert row button //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ _insertRowBtn.click(function(){ var prefixName = _cbAll.attr('name'); jQuery("input[name^='"+prefixName+"']").each(function(){ var currentRow = this.name;// ckbox name the same with rowid if (this.checked == true){ newRow = jQuery("#"+blankRowID).clone(true).insertAfter(jQuery("#"+currentRow)) .attr("style", "display: ''") .addClass("rowData") .removeAttr("id"); } }); refreshTable(_table); return false; }); //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //Function to refresh new row //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ function refreshTable(_table){ var tableId = _table.attr('id'); var count =1; // ignore hidden column // update tr rowid jQuery ( "#"+tableId ).find(".rowData").each(function(){ jQuery(this).attr('id', tableId + "_" +
count ); count ++; }); count =0; jQuery ( "#"+tableId ).find("input[type='checkbox'][name^='"+tableId+"']").not(".checkBoxAll").each(function(){ // update check box id and name (not check all) jQuery(this).attr('id', tableId + "_ckbox" + count ); jQuery(this).attr('name', tableId + "_" + count ); count ++; }); // write customize code here customerRow(_table); }; //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ //Function to customer new row : modify here //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ function customerRow(_table){ var form = document.myform; var pageColumns = ["prhSeq", "prhChannelproperty", "prhSrc"]; //modify here var tableId = _table.attr('id'); var count =1; // ignore hidden column // update tr rowid jQuery ( "#"+tableId ).find(".rowData").each(function(){ for(var i = 0; i < pageColumns.length; i++){ jQuery ( this ).find("input[name$='"+pageColumns[i]+"']").each(function(){ jQuery(this).attr('name', 'prhIndexed['+count+'].'+pageColumns[i] ); // update prhSeq Value if (pageColumns[i] == "prhSeq") { jQuery(this).attr('value', count ); } if (pageColumns[i] == "prhSrc") { // clear default onfocus //jQuery(this).attr("onfocus", ""); jQuery(this).focus(function() { // doSomething }); } }); jQuery ( this ).find("select[name$='"+pageColumns[i]+"']").each(function(){ jQuery(this).attr('name', 'prhIndexed['+count+'].'+pageColumns[i] ); }); }// end of for count ++; }); jQuery ( "#"+tableId ).find(".rowData").each(function(){ // only for debug alert ( jQuery(this).html() ) }); }; //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ }); <div id="controls"> <table width="350px" border="0"> <tr><td> <input id="addRow" type="button" name="addRows" value="Add Row" /> <input id="add_rows_number" type="text" name="add_rows_number" value="1" style="width:20px;" maxlength="2" /> <input id="insertRow" type="button" name="insert" value="Insert Row" /> <input id="removeRow" type="button" name="deleteRows" value="Delete Row" /> </td></tr> </table></div> <table id="prh" width="350px" border="1"> <thead> <tr class="listheader"> <td nowrap width="21"><input type="checkbox" name="prh_" class="checkBoxAll"/></td> <td nowrap width="32">Sequence</td> <td nowrap width="153" align="center">Channel</td> <td nowrap width="200">Source</td> </tr> </thead> <tbody> <!-- dummy row --> <tr id='blankRow' style="display:none" > <td><input type="checkbox" id='prh_ckbox0' name='prh_0' value=""/></td> <td align="right"><input type="text" name="prhIndexed[0].prhSeq" maxlength="10" value="" onkeydown="" onblur="" onfocus="" readonly="readonly" style="width:30px;background-color:transparent;border:0;line-height:13pt;color: #993300;background-color:transparent;border:0;line-height:13pt;color: #993300;"></td> <td><select name="prhIndexed[0].prhChannelproperty"><option value=""></option> <option value="A01">A01</option> <option value="A02">A02</option> <option value="A03">A03</option> <option value="A04">A04</option> </select></td> <td><input type="text" name="prhIndexed[0].prhSrc" maxlength="6" value="new" style="width:80px;background-color:#FFFFD7;"> <div id='displayPrhSrcName0'></div> </td> </tr> <!-- row data --> <tr id='prh_1' class="rowData"> <td><input type="checkbox" id='prh_ckbox1' name='prh_1' value=""/></td> <td align="right"><input type="text" name="prhIndexed[1].prhSeq" maxlength="10" value="1" onkeydown="" onblur="" onfocus="" readonly="readonly" style="width:30px;background-color:transparent;border:0;line-height:13pt;color: 
#993300;background-color:transparent;border:0;line-height:13pt;color: #993300;"></td> <td><select name="prhIndexed[1].prhChannelproperty"><option value=""></option> <option value="A01">A01</option> <option value="A02">A02</option> <option value="A03">A03</option> <option value="A04">A04</option> </select></td> <td><input type="text" name="prhIndexed[1].prhSrc" maxlength="6" value="new" style="width:80px;background-color:#FFFFD7;"> <div id='displayPrhSrcName0'></div> </td> </tr> <tr id='prh_2' class="rowData"> <td><input type="checkbox" id='prh_ckbox2' name='prh_2' value=""/></td> <td align="right"><input type="text" name="prhIndexed[2].prhSeq" maxlength="10" value="2" onkeydown="" onblur="" onfocus="" readonly="readonly" style="width:30px;background-color:transparent;border:0;line-height:13pt;color: #993300;background-color:transparent;border:0;line-height:13pt;color: #993300;"></td> <td><select name="prhIndexed[2].prhChannelproperty"><option value=""></option> <option value="A01">A01</option> <option value="A02">A02</option> <option value="A03">A03</option> <option value="A04">A04</option> </select></td> <td><input type="text" name="prhIndexed[2].prhSrc" maxlength="6" value="new" style="width:80px;background-color:#FFFFD7;"> <div id='displayPrhSrcName0'></div> </td> </tr> </tbody> </table>

    Read the article

  • Subclassing UINavigationBar ... how do I use it in UINavigationController?

    - by funkadelic
    Hi, I wanted to subclass UINavigationBar (to set a custom background image and text color) and use that for all the navigation bars in my app. Looking at the API docs for UINavigationController, it looks like navigationBar is read-only: @property(nonatomic, readonly) UINavigationBar *navigationBar. Is there a way to actually use a custom UINavigationBar in my UIViewControllers? I know that other apps have done custom navigation bars, like Flickr (screenshot omitted here).

    Here is my UINavigationBar subclass:

    #import <UIKit/UIKit.h> @interface MyNavigationBar : UINavigationBar <UINavigationBarDelegate> { } @end

    The implementation:

    #import "MyNavigationBar.h" @implementation MyNavigationBar - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code } return self; } - (void)drawRect:(CGRect)rect { // override the standard background with our own custom one UIImage *image = [[UIImage imageNamed:@"navigation_bar_bgd.png"] retain]; [image drawInRect:rect]; [image release]; } #pragma mark - #pragma mark UINavigationDelegate Methods - (void)navigationController:(UINavigationController *)navigationController willShowViewController:(UIViewController *)viewController animated:(BOOL)animated{ // use the title of the passed in view controller NSString *title = [viewController title]; // create our own UILabel with custom color, text, etc UILabel *titleView = [[UILabel alloc] init]; [titleView setFont:[UIFont boldSystemFontOfSize:18]]; [titleView setTextColor:[UIColor blackColor]]; titleView.text = title; titleView.backgroundColor = [UIColor clearColor]; [titleView sizeToFit]; viewController.navigationItem.titleView = titleView; [titleView release]; viewController.navigationController.navigationBar.tintColor = [UIColor colorWithRed:0.1 green:0.2 blue:0.3 alpha:0.8]; } - (void)navigationController:(UINavigationController *)navigationController didShowViewController:(UIViewController *)viewController animated:(BOOL)animated{ } - (void)dealloc { [super dealloc]; } @end

    I know that I can use a category to change the background image, but I still want to be able to set the text color of the navigation bar title:

    @implementation UINavigationBar (CustomImage) - (void)drawRect:(CGRect)rect { UIImage *image = [UIImage imageNamed: @"navigation_bar_bgd.png"]; [image drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)]; } @end

    Any suggestions or other solutions? I basically want to create a light background and dark text, like Flickr's app navigation bars.

    Read the article

  • JQuery UI function errors out: Object is not a property or method

    - by Luke101
    In the following code I get an error on the autocomplete function that says "Object is not a property or method". Here is the code:

    <title><%= ViewData["pagetitle"] + " | " + config.Sitename.ToString() %></title>
    <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/minified/jquery.ui.core.min.js" type="text/javascript"></script>
    <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/minified/jquery.ui.core.min.js" type="text/javascript"></script>
    <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.widget.js" type="text/javascript"></script>
    <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.position.js" type="text/javascript"></script>
    <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.autocomplete.js" type="text/javascript"></script>
    <script language="javascript" type="text/javascript" src="/Scripts/main.js"></script>
    <script language="javascript" type="text/javascript">
    $(document).ready(function () { Categories(); $('#tags1').autocomplete({ //error here
        url: '/Tag/TagAutoComplete', width: 320, max: 4, delay: 30, cacheLength: 1, scroll: false, highlight: false
    }); });
    </script>
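
    In case it helps: the options in that call (url, max, delay, cacheLength, scroll, highlight) come from the older bassistance autocomplete plugin, not from jQuery UI, and jQuery core itself does not appear in the script includes above (jquery.ui.core.min.js is listed twice instead). A jQuery UI 1.8 style call would look more like this sketch, where the minLength value is illustrative and jQuery core is assumed to be loaded before the jQuery UI files:

    $(document).ready(function () {
        $('#tags1').autocomplete({
            source: '/Tag/TagAutoComplete', // jQuery UI requests this URL with a "term" query parameter
            minLength: 2
        });
    });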

    Read the article

  • Reuse security code between WCF and MVC.NET

    - by mrjoltcola
    First the background: I jumped into MVC.NET from the Java MVC world, so my implementation below is possibly cheating, I don't know. I avoided fooling with a custom membership provider and just implemented the base code needed to authenticate and load roles in my LogOn action. Typically I just need to check roles programmatically, and have no use for all of the other membership features, so I didn't originally think I needed a full Membership provider. I have a successful WCF project with a custom authentication and authorization layer that I did at least write per the proper API. I implemented it with custom IPrincipal, UserNamePasswordValidator and IAuthorizationPolicy classes that load from an Oracle database. In my WCF services, I use declarative security: [PrincipalPermission(SecurityAction.Demand, Role="ADMIN")].

    The question (on the ASP.NET/MVC.NET side): all my reading indicates I should implement a custom Membership/Roles provider and use [Authorize(Roles="ADMIN")] on my controller actions. At this point, I don't have a true Membership provider, but I'm using the same User class, implementing the IPrincipal interface, that works with the WCF security. I plan to share common code between the WCF and ASP.NET modules. So my LogOn action is not using the FormsService (and I assume this is bad); I had commented it out, and just used my "UserService" to access the Oracle db. Note my "TODO" comment below.

    public ActionResult LogOn(LogOnModel model, string returnUrl)
    {
        log.Info("Login attempt by " + model.UserName);
        if (ModelState.IsValid)
        {
            User user = userService.findByUserName(model.UserName);
            // Commented original MembershipService code, this is probably bad
            // if (MembershipService.ValidateUser(model.UserName, model.Password))
            if (user != null && user.Authenticate(model.Password) == true)
            {
                log.Info("Login success by " + model.UserName);
                FormsService.SignIn(model.UserName, model.RememberMe);
                // TODO: Override with Custom identity / roles?
                user.AddRoles(userService.listRolesByUser(user)); // pull in roles from db
                if (!String.IsNullOrEmpty(returnUrl))
                    return Redirect(returnUrl);
                else
                    return RedirectToAction("Index", "Home");
            }
            else
            {
                log.Info("Login failure by " + model.UserName);
                ModelState.AddModelError("", "The user name or password provided is incorrect.");
            }
        }
        // If we got this far, something failed, redisplay form
        return View(model);
    }

    So can I make the above work? Can I stick the IPrincipal (User) into the CurrentContext or HttpContext? Can I integrate the custom IPrincipal I've already created without writing a full Membership/Roles provider? I currently stick the User object into the session and access it from all MVC.NET controllers with a "CurrentUser" property which grabs it from the session on demand. But this doesn't work with the [Authorize] attribute; I assume that is because it knows nothing about my custom Principal in the session, and is instead using whatever FormsService.SignIn() produces. I also found that session timeouts screw up the login redirect: the user doesn't get forwarded; instead we get a null exception accessing User from the session, and I assume that is related to my "skipping steps" to get a quick implementation. Thanks.
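
    One standard pattern that should make [Authorize(Roles="ADMIN")] work without a full Membership provider is to rebuild the principal on every request in Global.asax and assign it to HttpContext.User, since the Authorize attribute checks HttpContext.User.IsInRole rather than anything kept in the session. The sketch below assumes the LogOn action stores a pipe-delimited role list in the forms ticket's UserData (which means creating the FormsAuthenticationTicket manually instead of calling FormsService.SignIn); a custom IPrincipal such as the shared User class could be assigned instead of the GenericPrincipal used here.

    // In Global.asax.cs; needs System.Web, System.Web.Security and System.Security.Principal.
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        HttpCookie authCookie = context.Request.Cookies[FormsAuthentication.FormsCookieName];
        if (authCookie == null) return;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
        if (ticket == null || ticket.Expired) return;

        // Roles were written into the ticket's UserData at sign-in, e.g. "ADMIN|USER".
        string[] roles = ticket.UserData.Split('|');
        context.User = new GenericPrincipal(new FormsIdentity(ticket), roles);
    }

    Because the roles ride in the authentication cookie rather than the session, this also sidesteps the session-timeout null reference described above.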

    Read the article

  • How to remove the Thumbnail property of a JPEG image without disturbing other Exif data in C++ .NET

    - by Ravi shankar
    I have an application which edits the metadata part of a JPEG image. I have to remove the thumbnail metadata without disturbing the other metadata. I have tried the code below but was not successful in removing the thumbnail metadata. Can someone help me out in solving this? Thanks in advance.

    array<String^>^ query = gcnew array<String^>(4);
    query[0] = "/app1/ifd/tiff:";
    query[1] = "/app1/ifd/tiff/subifd:";
    query[2] = "/ifd/tiff:";
    query[3] = "/ifd/tiff/subifd:";
    for each (String^ SetQuery in query)
    {
        metaData->RemoveQuery(SetQuery + "{uint=256}");
        metaData->RemoveQuery(SetQuery + "{uint=257}");
        metaData->RemoveQuery(SetQuery + "{uint=258}");
        metaData->RemoveQuery(SetQuery + "{uint=259}");
        metaData->RemoveQuery(SetQuery + "{uint=273}");
        metaData->RemoveQuery(SetQuery + "{uint=262}");
        metaData->RemoveQuery(SetQuery + "{uint=277}");
        metaData->RemoveQuery(SetQuery + "{uint=278}");
        metaData->RemoveQuery(SetQuery + "{uint=279}");
        metaData->RemoveQuery(SetQuery + "{uint=282}");
        metaData->RemoveQuery(SetQuery + "{uint=283}");
        metaData->RemoveQuery(SetQuery + "{uint=284}");
        metaData->RemoveQuery(SetQuery + "{uint=296}");
        metaData->RemoveQuery(SetQuery + "{uint=513}");
        metaData->RemoveQuery(SetQuery + "{uint=514}");
        metaData->RemoveQuery(SetQuery + "{uint=529}");
        metaData->RemoveQuery(SetQuery + "{uint=530}");
        metaData->RemoveQuery(SetQuery + "{uint=531}");
        metaData->RemoveQuery(SetQuery + "{uint=532}");
    }

    Read the article

  • UINavigationController inside a tab bar loading a child root view

    - by Doug
    Hi guys, firstly I'll preface by saying that I am a complete Cocoa Touch / Objective-C noob (.NET dev having a dabble). I have searched on Google as well as here but cannot seem to find an easy solution. I have a UITabBarController view with a UINavigationController inside its first tab. I have the root view for this UINavigationController stored in a separate class and NIB, as I am trying to separate the data viewing from the data loading (I'm going to reuse the table list in multiple places in my database), and simply pass the root view its data using a loading method and have it take it from there.
    What I want to happen:
    1. The app loads and loads the first view of the tab bar (a UINavigationController).
    2. The UINavigationController inside the first view loads a root view (a UIViewController with a table view) and sets its title.
    3. The UINavigationController loads the data from a web service and parses it.
    4. The UINavigationController sends the data to a loading method inside the UIViewController.
    Am I thinking about this completely wrongly?
    What currently happens:
    1. The first tab loads with an empty UINavigationController (no table view).
    2. The data methods fire and get the web service data.
    3. The child view gets sent its data using the loading method.
    4. The table view delegate events fail to fire inside the child view, telling it to load the data into the table.
    I just can't see how to load my second view inside the root view of the navigation controller and then send it my data.

    Read the article
