Search Results

Search found 14610 results on 585 pages for 'session storage'.


  • Need help displaying data inside a marquee

    - by user59637
    Hi all, I want to display news inside a marquee element in my banking application, but it's not working. Can somebody please tell me what the error in my code is? Here is my code:

        <marquee bgcolor="silver" direction="left" id="marq1" runat="server" behavior="scroll"
                 scrolldelay="80" style="height: 19px" width="565">
        <% String se = Session["countnews"].ToString();
           for (int i = 0; i < int.Parse("" + se); i++) { %>
            <strong><% Response.Write("&nbsp;&nbsp;" + Session["news" + i] + "&nbsp;&nbsp;"); %></strong>
        <% } %>
        </marquee>

        public class News
        {
            DataSet ds = new DataSet("Bank");
            SqlConnection conn;
            String check;
            SqlDataAdapter sda;
            int i;
            public string News_Name;
            public int Count_News;

            public int newsticker()
            {
                conn = new SqlConnection(ConfigurationManager.ConnectionStrings["BankingTransaction"].ConnectionString.ToString());
                check = "Select NewsTitle from News where NewsStatus = 'A'";
                sda = new SqlDataAdapter(check, conn);
                sda.Fill(ds, "News");
                if (ds.Tables[0].Rows.Count > 0)
                {
                    for (i = 0; i < ds.Tables[0].Rows.Count; i++)
                    {
                        News_Name = i + ds.Tables[0].Rows[i].ItemArray[0].ToString();
                    }
                    Count_News = ds.Tables[0].Rows.Count;
                }
                else
                {
                    News_Name = 0 + "Welcome to WestSide Bank Online Web site!";
                    Count_News = 1;
                }
                return int.Parse(Count_News.ToString());
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                News obj = new News();
                try
                {
                    obj.newsticker();
                    Session["news"] = obj.News_Name.ToString();
                    Session["countnews"] = obj.Count_News.ToString();
                }
                catch (SqlException ex)
                {
                    Response.Write("Error in login" + ex.Message);
                    Response.Redirect("Default.aspx");
                }
                finally
                {
                    obj = null;
                }
            }
        }
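
    For what it's worth, two mismatches stand out: the page reads Session["news" + i] for each i, but Page_Load only ever sets Session["news"], and newsticker() overwrites News_Name on every loop pass, so at most one headline survives. A minimal sketch of a Page_Load that stores one session key per headline (the table and connection-string names are taken from the question; everything else is illustrative, not the asker's code):

        // Assumes: using System.Collections.Generic; using System.Configuration; using System.Data.SqlClient;
        protected void Page_Load(object sender, EventArgs e)
        {
            var titles = new List<string>();
            using (var conn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["BankingTransaction"].ConnectionString))
            using (var cmd = new SqlCommand("SELECT NewsTitle FROM News WHERE NewsStatus = 'A'", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        titles.Add(reader.GetString(0));
            }
            if (titles.Count == 0)
                titles.Add("Welcome to WestSide Bank Online Web site!");

            for (int i = 0; i < titles.Count; i++)
                Session["news" + i] = titles[i];          // one key per headline
            Session["countnews"] = titles.Count.ToString();
        }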

    Read the article

  • Basic Hibernate Caching Question

    - by manyxcxi
    Does Hibernate use its cache (second-level or otherwise) if all I am doing is batch inserts? No entities are being requested from the database, and no generators are used. Also, would StatelessSession vs. Session change the answer? What if I were using a Session with a JDBC batch size of 50? The cache I will be using is Ehcache.
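
    For reference, a minimal sketch of the StatelessSession variant, which bypasses the first-level cache and does not interact with the second-level cache at all (sessionFactory, the Event entity, and the events collection are assumed names, not from the question):

        // Pure batch insert with no caching work done by Hibernate.
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        for (Event event : events) {
            session.insert(event);   // straight to JDBC, nothing cached
        }
        tx.commit();
        session.close();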

    Read the article

  • memcached entries never expire in Rails?

    - by pickerel
    It's very strange that the session never expires when I use the memcached store, even though I set

        config.action_controller.session = { :session_expires => 1.seconds.from_now }

    I also use extended_fragment_cache for fragment caching and hit the same problem:

        <% cache "my_page", { :expires => 1.minutes } do %> ... <% end %>

    never expires! Does anyone know where the problem is?
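
    One hedged observation: memcached-backed Rails caches of that era took their TTL via :expires_in and silently ignored unknown options such as :expires, which looks exactly like "never expires". A sketch of the conventional form (behaviour may differ under the extended_fragment_cache plugin):

        <% cache "my_page", :expires_in => 1.minute do %>
          ... expensive fragment ...
        <% end %>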

    Read the article

  • Debugging php-cli scripts with xdebug and netbeans?

    - by wurdalack
    I have managed to initiate a php-cli debug session from the IDE itself, but I need to start the debugging session from the shell / command line. These are rather complex maintenance PHP scripts which take a lot of input parameters, so entering arguments from within NetBeans is a bit cumbersome. I have done it before with Zend Studio (http://kb.zend.com/index.php?View=entry&EntryID=130), but now I need to get it working with NetBeans. Thanks in advance.
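
    For anyone in the same spot: Xdebug reads the XDEBUG_CONFIG environment variable, so a CLI debug session can typically be started along these lines (the idekey must match the Session ID NetBeans expects, netbeans-xdebug by default; the host, port, and script name are assumptions for an IDE on the same machine):

        # Start "Debug" (listen for connections) in NetBeans first, then:
        export XDEBUG_CONFIG="idekey=netbeans-xdebug remote_host=localhost remote_port=9000"
        php maintenance_script.php --lots --of --args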

    Read the article

  • Mounting Gluster Volumes

    - by Roman Newaza
    I have created a Hosted Zone with the 2 IP addresses of the Gluster cluster; both IPs are returned by dig. After mounting Gluster, I cannot ls the mount point as it takes a long time. mount shows it is mounted, but df doesn't. Finally, I get this: ls: cannot access /mnt/storage: Transport endpoint is not connected. But if I mount it with one of the IPs directly, there is no problem - the volume contents are accessible. OS: Ubuntu 11.10. GlusterFS: 3.2.6. Log: http://pastie.org/private/2jgp4h1hnqgzych3djtg. I can telnet to the storage from the client - the ports are open.
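
    One pattern worth trying (a sketch, not a confirmed fix for this log): skip the round-robin DNS name and mount against one fixed server, naming the other as a fallback volfile server via the mount option GlusterFS provides for exactly this (hostnames and volume name are placeholders):

        mount -t glusterfs -o backupvolfile-server=server2 server1:/myvolume /mnt/storage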

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

    - create an EC2 machine image that has my application and associated data files installed
    - boot up a large number (e.g. 100) of instances of this image
    - split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
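
    One approach worth sketching (all IDs below are placeholders): load the 20GB file onto an EBS volume once, snapshot it, and create one volume per instance from that snapshot. A single snapshot can back any number of new volumes, so the 100 "clones" come cheaply:

        # One-time: snapshot the prepared data volume.
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "shared 20GB dataset"

        # Per instance: create a volume from the snapshot in the instance's AZ and attach it.
        aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
            --instance-id i-0123456789abcdef0 --device /dev/sdf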

    Read the article

  • Software/FakeRAID: Windows 8 Disk Mirroring vs Intel Onboard

    - by Johnny W
    So Windows 8 is out and I have a new motherboard. I wish to create a RAID 1 pairing between two HDDs - for storage purposes only (my OS is on an SSD) - but I don't know which is the best route to take. My motherboard (Z77 chipset) comes with the age-old Intel FakeRAID, but since I only wish to use the RAID for storage, I wondered if I might be better off with Windows 8 Disk Mirroring. Can anyone advise which is better? Or perhaps the pros and cons of each, if that's too contentious? I just can't see the benefit of FakeRAID. My current setup was shown in a screenshot attached to the original post, in case that changes things. Thanks!

    Read the article

  • USB device in dual mode on Gentoo Linux

    - by Idlecool
    I have a flip-flop USB modem that has two modes.

    1. USB mass storage mode:

        root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb
        Bus 006 Device 003: ID 19d2:fff5 ONDA Communication S.p.A.

    2. usbserial mode:

        root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb
        Bus 006 Device 003: ID 19d2:fffe ONDA Communication S.p.A.

    By default, whenever I plug the modem into the USB port, the Linux machine recognizes it as a USB mass storage device. How can I make it load as a usbserial device? I used the usb_modeswitch package in the past on Ubuntu 10.04, but I cannot install that package on the Gentoo live CD; even udev is not installed on the live CD. How can I change the product ID of the USB device on the Gentoo live disc without udev?
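
    Two hedged possibilities, given that usb_modeswitch isn't available on the live CD. Many ZTE/ONDA devices flip to serial mode when the emulated CD-ROM they expose is ejected, which needs no extra packages; and if a usb_modeswitch binary can be dropped onto the system, the standard invocation looks roughly like this (the -M payload is the widely circulated ZTE eject message, not verified for this exact model):

        # Option 1: no extra packages - eject the fake CD-ROM the modem presents.
        eject /dev/sr0

        # Option 2: with a usb_modeswitch binary; 19d2:fff5 comes from the lsusb output above.
        usb_modeswitch -v 0x19d2 -p 0xfff5 \
            -M 5553424312345678000000000000061b000000020000000000000000000000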

    Read the article

  • Do I have to chmod 777 my NFS folder when I share?

    - by luckytaxi
    Under Red Hat, if I export a folder as an NFS mount, does the folder have to have RW for users/groups/others? Right now /storage/software is drwxr-xr-x root:root, and /etc/exports contains:

        /storage/software *(rw,sync)

    On my client, I can mount it but I can't write. I'm using a regular user, NOT root. I think "no_root_squash" would fix it, but I really don't want that. Then again, nor do I want to have to chmod 777 the folder on the server.
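
    For context: no, the rw export option only allows the NFS protocol to attempt writes; ordinary Unix permissions are still checked against the uid the client presents (no_root_squash only affects root). The usual alternative to 777 is a dedicated group, sketched here on the server side (group and user names are placeholders):

        groupadd software
        chgrp software /storage/software
        chmod 2775 /storage/software     # setgid: new files inherit the group
        usermod -aG software someuser    # user needs the same uid/gid on client and server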

    Read the article

  • Best way to code this, string to map conversion in Groovy

    - by Daxon
    I have a string like:

        def data = "session=234567893egshdjchasd&userId=12345673456&timeOut=1800000"

    I want to convert it to a map:

        ["session", 234567893egshdjchasd]
        ["userId", 12345673456]
        ["timeOut", 1800000]

    This is the current way I am doing it:

        def map = [:]
        data.splitEachLine("&") {
            it.each { x ->
                def object = x.split("=")
                map.put(object[0], object[1])
            }
        }

    It works, but is there a more efficient way?
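
    A more idiomatic sketch, assuming Groovy 1.8+ for collectEntries (on older versions, inject over the same tokenized pairs works the same way):

        def map = data.tokenize('&').collectEntries {
            def (key, value) = it.tokenize('=')
            [(key): value]
        }
        assert map.userId == '12345673456'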

    Read the article

  • "Always available offline" option missing on one network drive in Windows 7

    - by Rynardt
    My network setup at home has 2 network storage devices: one holds media content on a Popcorn Hour A-110, and the other is a D-Link DNS-320 in RAID 1 configuration for business files. When I access these network drives and right-click a folder, the "Always available offline" entry appears in the context menu for the A-110 device, but not for the D-Link. I have tested this on both Windows 7 32-bit and Windows Vista 64-bit. In both cases the "Always available offline" option is only available for the A-110 storage device, and not for the D-Link. How do I get this option for the D-Link? Any advice or ideas are welcome.

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my 5 1TB drives for two RAID volumes: a 1TB RAID 0 striped across all 5 drives, and a RAID 5 across the rest of the free space on all drives (around 2.85TB usable). The RAID 0 I use for the OS, applications and games, while the RAID 5 I use as more permanent storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which brings up the following question. Is there a reliable freeware real-time backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Real-time backups: as soon as a file is modified in the source folder, it is added to a backup queue that is processed at the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
    - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
    - Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
    - Preset source folders (such as browser favorites, game saves, application settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.

    Read the article

  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite, but when the identical code is used against SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background. In my domain, various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model, an example of this would be:

        /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>.</summary>
        public class StaffMemberResource : ResourceBase
        {
            public virtual StaffMember StaffMember { get; private set; }

            public StaffMemberResource(StaffMember staffMember)
            {
                Check.RequireNotNull<StaffMember>(staffMember);
                base.BusinessId = staffMember.Number.ToString();
                base.Name = staffMember.Name.ToString();
                base.OrganizationName = staffMember.Department.Name;
                StaffMember = staffMember;
            }

            [UsedImplicitly]
            protected StaffMemberResource() { }
        }

    And in the db tables there is table-per-class inheritance, where ResourceBase has a discriminator and the id of the actual resource (i.e. StaffMember):

        StaffMember - 1 ---- M - ResourceBase - 1 ----- M - Allocation

    The code:

        public override StaffMemberResource BuildResource(IActivityService activityService)
        {
            var sessionFactory = _GetSessionFactory();
            var session = sessionFactory.GetCurrentSession();
            StaffMemberResource result;
            using (var tx = session.BeginTransaction())
            {
                var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number);
                var staff = session.CreateCriteria<StaffMember>()
                    .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId)))
                    .UniqueResult<StaffMember>();

                if (staff == null)
                {
                    // ... build up a staff member
                    result = new StaffMemberResource(staff);
                }
                else
                {
                    // this is the second criteria query discussed below
                    var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember);
                    result = session.CreateCriteria<StaffMemberResource>()
                        .Add(Restrictions.Eq(property, staff))
                        .UniqueResult<StaffMemberResource>();
                }
                tx.Commit();
            }
            return result;
        }

    It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee number is translated into a ResourceBase.BusinessId, Name is flattened out into a ResourceBase.Name, etc. Does anyone know why this might be? Cheers, Berryl

    Read the article

  • SQLAlchemy returns tuple not dictionary

    - by Ivan
    Hi everyone, I've updated SQLAlchemy to 0.6, but it broke everything. I've noticed it now returns a tuple, not a dictionary. Here's a sample query:

        query = session.query(User.id, User.username, User.email).filter(
            and_(User.id == id, User.username == username)).limit(1)
        result = session.execute(query).fetchone()

    This piece of code used to return a dictionary in 0.5. My question is: how can I get a dictionary back?
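
    For reference, the row object that 0.6's fetchone() returns is tuple-like but still knows its column names, so a dict can usually be rebuilt along these lines (a sketch; exact RowProxy behaviour differs between 0.6 point releases):

        row = session.execute(query).fetchone()
        if row is not None:
            # RowProxy exposes keys(), so zip the names back onto the values.
            result = dict(zip(row.keys(), row))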

    Read the article

  • Executing Password Change over Ruby Net-SSH

    - by tesmar
    Hi all, I am looking to execute a password change over Net::SSH, and this code seems to hang:

        Net::SSH.start(server_ip, "user", :verbose => :debug) do |session|
          session.process.popen3("ls") do |input, output, error|
            ["old_pass", "test", "test"].each do |x|
              input.puts x
            end
          end
        end

    I know the connection works, because using a simple exec I can get the output of ls on the remote server, but this hangs. Any ideas? The last message from debug is that the public key succeeded.
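
    One hedged note: session.process.popen3 is the old Net::SSH v1 API. Under Net::SSH v2, an interactive passwd exchange is usually driven through a channel, roughly like this (a sketch; the prompt strings passwd prints vary by system, so the regexes are assumptions):

        require 'net/ssh'

        Net::SSH.start(server_ip, 'user') do |ssh|
          ssh.open_channel do |channel|
            channel.request_pty                    # passwd insists on a terminal
            channel.exec('passwd') do |ch, success|
              raise 'could not start passwd' unless success
              ch.on_data do |_, data|
                case data
                when /current password|old password/i then ch.send_data("old_pass\n")
                when /new password/i                  then ch.send_data("test\n")
                end
              end
            end
          end
          ssh.loop
        end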

    Read the article

  • How to manage state in REST

    - by user317050
    I guess this question will sound familiar, but I am yet another programmer baffled by REST. I have a traditional web application which goes from StateA to StateB and so on. If the user goes to (the URL of) StateB, I want to make sure that he has visited StateA before. Traditionally, I'd do this using session state. Since session state is not allowed in REST, how do I achieve this?
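
    One common RESTful answer, sketched as an HTTP exchange: have StateA's response hand the client something it must present to StateB, so "has visited A" travels inside the request instead of living in server-side session (the URIs and token format below are illustrative only):

        GET /stateA HTTP/1.1

        HTTP/1.1 200 OK
        ... body contains the only link to B, carrying a server-signed ticket:
        <a href="/stateB?ticket=SIGNED-TOKEN-ISSUED-BY-A">continue</a>

        GET /stateB?ticket=SIGNED-TOKEN-ISSUED-BY-A HTTP/1.1
        ... server verifies the ticket's signature; no session lookup needed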

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque + S3 for storage

    The flow of the app is as follows:

    1) Request from frontend. Upload a video.
    2) Storing the video.
    3) Querying external APIs.
    4) Processing / encoding videos.
    5) Post to frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app, making several instances), but since I don't really have expertise in backend system administration, there may be some fundamental mistakes. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3).
    C) S3 storage.
    D) Server for querying APIs. Basically just Resque workers that post info back to 2).
    E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

        A)frontend
             \
              \
        B)MAIN_APP/DB ----- C)S3 Storage (Files)
             / \                  /
            /   \                /
        D)ExternalAPI_queries   E)Video_Processing
           (redundant DB)          (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reason for this layout is that the video-processing part is by far the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (and how, then)?

    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.

    3) What should I use to distribute calls between the backend apps based on load?
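
    On question 3), since Resque is already in the stack, one sketch of the usual pattern: keep a single Redis-backed queue and run identical Resque workers on as many encoding boxes as needed. Idle workers pull the next job themselves, so Redis does the load distribution and the main app never pushes work over HTTP (the queue, class, and model names here are made up):

        # In the Rails app, after the upload is stored on S3:
        Resque.enqueue(EncodeVideo, video.id)

        # On each encoding box, run: QUEUE=encode rake resque:work
        class EncodeVideo
          @queue = :encode

          def self.perform(video_id)
            video = Video.find(video_id)
            # download from S3, encode, upload the result back ...
          end
        end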

    Read the article

  • Powershell profile "on exit" event?

    - by poke
    I'm looking for a way to automatically do some cleanup tasks when the PowerShell session quits. For example, in my profile file I start a process which needs to run in the background for quite a lot of tasks, and I would like to automatically close that process when I close the console. Is there some function that PowerShell automatically calls when closing the session, the way it calls prompt when displaying the prompt?
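
    There is an engine event for exactly this; a minimal sketch for the profile (MyBackgroundTool is a placeholder, and the usual caveat applies that the event fires when the session exits through the engine, reportedly not when the console window is killed via the X button):

        Register-EngineEvent -SourceIdentifier PowerShell.Exiting -Action {
            Stop-Process -Name MyBackgroundTool -ErrorAction SilentlyContinue
        }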

    Read the article

  • php include path problem: same code works on Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS, it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting include_path to different things, but it just doesn't work. The file is in a directory called language, in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
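
    A hedged reading of that error: the failing call is a require_once in class_lang.inc that resolves to .../class/core/storage.inc, while the file apparently lives one level down in .../class/core/language/storage.inc. Building the path from the including file (or prepending that directory to the include path once) removes any dependence on the two servers' differing include_path defaults:

        <?php
        // In class_lang.inc: resolve relative to this file, not to include_path.
        require_once dirname(__FILE__) . DIRECTORY_SEPARATOR
            . 'language' . DIRECTORY_SEPARATOR . 'storage.inc';

        // Or, if many files rely on relative includes, prepend the directory once:
        set_include_path(dirname(__FILE__) . PATH_SEPARATOR . get_include_path());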

    Read the article

  • Will a larger hard drive affect performance?

    - by user273010
    My laptop came with a 500 GB hard drive. I use my laptop for storing my digital photographs and have only about 14 GB of storage left on the original drive. I have a 750 GB external hard drive, but I am leery of relying on it for primary storage, as I tend to knock things over and it has already crashed once, losing a lot of my files. I am looking at a 1 TB internal hard drive, but am concerned whether storing so much data will affect the computer's performance. Should I also increase the RAM from 4 to 8 GB (the limit for my 64-bit Windows 7 Asus A54C laptop)?

    Read the article

  • iMac boot from linux partition on external drive

    - by user74757
    I have the following setup:

        iMac (no internal drive/dead) ------ (FireWire) ------ [[MAC OS X]]
          |
          | (USB)
          |
        [[MISC STORAGE PARTITION] [MISC STORAGE PARTITION] [EXT2 UBUNTU PARTITION]]

    I routinely use the FireWire drive to boot Mac OS X. However, I would like to boot from the Linux partition on the USB drive. That Linux partition had Linux installed onto it from a live CD, and during that process I told the installer to install GRUB on the USB drive (which happened to be /dev/sdd). My question is: how do I get this disk to show up during the iMac's option-boot? Currently, only the FireWire Mac OS X option shows up. I have read about rEFIt, but that appears to install to the Mac OS X disk (would that still work?). Also mentioned was installing rEFIt to the internal EFI system partition, but I don't know if that is wise.

    Read the article

  • Start a ZFS RAIDZ zpool with two disks, then add a third?

    - by Doug S.
    Let's say I have two 2TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two disks, giving me 2TB of usable storage (if I understand it right), and then later add another 2TB HDD, bringing the total to 4TB of usable storage? Am I correct, or do there need to be three HDDs to start with? The reason I ask is that I already have one 2TB drive in use that is full of files. I want to transition to a zpool, but I'd rather only buy two more 2TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID 5 (with some major differences, I know, but in terms of capacity). However, RAID 5 requires 3+ drives. I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding a fourth, but if I could start with two and move to three, that would save me $80.
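
    For reference, a hedged sketch of the shapes involved (device names are placeholders). The catch is that a raidz vdev cannot be widened after creation - zpool attach only extends mirrors - so the two-then-three plan doesn't work; with two disks the supported layout is a mirror:

        # Two disks today: a mirror (2TB usable from 2x2TB).
        zpool create tank mirror /dev/sdb /dev/sdc

        # Three disks from the start: raidz (~4TB usable from 3x2TB).
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd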

    Read the article

  • Problem with USB drivers (Windows XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager: two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller". With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers and inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got the message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry, and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers". I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right: a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device also still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)

    Read the article
