Search Results

Search found 636 results on 26 pages for 'dumb questioner'.


  • PHP cannot connect to MySQL

    - by yogal
    Hello, I recently installed Apache 2 + PHP 5.3.1 + MySQL 5.1.44 on my Windows 7 64-bit machine following this guide: http://sleeplessgeek.blogspot.com/2010/01/setting-up-apache-php-mysql-phpmyadmin.html It all went fine, and PHP is working great (even with XDebug), but I cannot connect to the MySQL server. A simple script I wrote to test the connection (yes, root has no password):

        $username = "root";
        $password = "";
        $database = "test";
        $hostname = "localhost";
        $conn = mysql_connect($hostname, $username, $password)
            or die("Unable to connect to MySQL Database!!");

    It prints this error after a 60-second timeout: Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. I can connect to MySQL from cmd: mysql -h localhost -u root. Services are running properly. There also seems to be a problem with phpMyAdmin (using 3.2.5): as soon as I type the user and password, the page loads and turns blank (Content-Length in the headers is 0, but the status code is 302 Found). Looks like something wrong with cookies (my auth method). I hope someone has a clue; it has to be something dumb simple I missed. Thanks in advance.
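
    One pattern consistent with these symptoms (a guess, not confirmed in the question) is the well-known Windows 7 issue where "localhost" resolves to the IPv6 address ::1 while MySQL listens only on IPv4, so the connect waits until it times out. A minimal stdlib Python sketch to test both loopback addresses from the same machine:

        # Try MySQL's port on both loopbacks; if 127.0.0.1 connects and ::1
        # times out, point $hostname at 127.0.0.1 (or fix the hosts file).
        import socket

        for host in ("127.0.0.1", "::1"):
            try:
                socket.create_connection((host, 3306), timeout=5).close()
                print(host, "-> connected")
            except OSError as exc:
                print(host, "->", exc)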

    Read the article

  • Gnome-panel disappearance in Ubuntu 10.10

    - by jurchiks
    Just today, after about a week of somewhat normal running (I'm a total beginner in Linux, and the level of amazingly stupid problems I encountered made me go nuts), my panel disappeared (the one with the Applications/System menus; you'd call it the taskbar in Windows). Also, Alt+F2 doesn't work, and Ctrl+Alt+Backspace has no effect (I'd think it's supposed to do something). I tried the solution posted here: Panel doesn't show at startup at Ubuntu 10.04 No luck - it didn't change anything at all. I also couldn't find the .gconf and .gconfd folders using search, so I couldn't try that option. There were folders with the same names but without the dot, but there were several of those, so I didn't risk it. What could possibly be the reason for this? All I did yesterday was try to install some updates (another extremely dumb problem - it refuses to install even the official updates, complaining about "insecure sources" or something like that; I tried fixing it with some tutorials from the net, but in the end it worked only for half a day and then went back to refusal mode) and a very few tools from the Ubuntu Software Center - nothing that would change system settings just by installing it.

    Read the article

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages and potentially notify N >= 1 of the clients of state changes. Sure, it sounds like a botnet, but it isn't. Consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1GigE. Messages are < 1KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state-change messages? There's an infinite number of ways to solve this problem efficiently:

    - AMQP (RabbitMQ, ZeroMQ, etc.)
    - Spread Toolkit
    - N^2 connections between all services (bad)
    - Heck, even run IRC!
    - ...

    I'm looking for a solution that:

    - perhaps exploits the fact that there's only a small, closed cluster
    - is easy to admin
    - scales well
    - is "dumb" (no weird edge cases)

    What are your experiences? What do you recommend? Thanks!
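
    For the brokerless route, a minimal pyzmq sketch of the PUB/SUB shape this could take (addresses, ports, and the message format are all illustrative; in practice each server runs both halves):

        # Each server binds one PUB socket for outgoing state changes and
        # SUBscribes to every peer; peers deliver to their local clients.
        import zmq

        ctx = zmq.Context()

        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://10.0.0.1:5555")        # this server's relay endpoint

        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://10.0.0.2:5555")     # one connect per peer server
        sub.setsockopt(zmq.SUBSCRIBE, b"")     # no filtering: take everything

        pub.send(b"client:42 state:changed")   # fan out a state change
        message = sub.recv()                   # peer side: relay to clients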

    Read the article

  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge collection of various OEM-licensed editions of software that are on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have each one be a dumb terminal accessing a virtualization server, with the entire virtual workstation hosted on the server. Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming that it is feasible, what rough guidelines exist for calculating the required server resources? How exactly do solutions handle a user accessing their VM - do they log on normally to their physical workstation and then use Remote Desktop to access their VM, or is it usually done with a client piece of software that negotiates this? What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight into VMware's product lineup, especially if there are any compelling reasons to choose it over Hyper-V in a Microsoft shop. Edit: To add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform the associated work on each differently configured workstation. Also, could anyone offer any realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations run nothing except Office, Outlook, and the web. The only high-demand workstations are the development workstations, which would keep everything local.

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a highly similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes receive, email. The errors I am seeing (when I do get an error mail back) state: 450 4.1.8 : Sender address rejected: Domain not found. Edit for clarity: this is the error response from remote mail servers; numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible - DNS is not my strong suit), or it's something a bit more archaic. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
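
    Since remote MTAs are the ones rejecting the sender domain, it can help to ask an outside resolver directly what it sees, rather than trusting the local BIND. A sketch using the third-party dnspython package (resolve() is the dnspython 2.x call; older versions spell it query()):

        # Ask a public resolver for the records remote mail servers check.
        import dns.resolver

        resolver = dns.resolver.Resolver()
        resolver.nameservers = ["8.8.8.8"]      # bypass the local BIND

        for rdtype in ("A", "MX", "NS"):
            try:
                answers = resolver.resolve("k-t.org", rdtype)
                print(rdtype, [str(r) for r in answers])
            except Exception as exc:
                print(rdtype, "lookup failed:", exc)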

    Read the article

  • Messed up partitions... system will not boot!

    - by someguy
    I did a really dumb thing. cfdisk threw an error at me saying "FATAL ERROR: Bad primary partition 3: Partition ends in the final partial cylinder", so I installed Partition Table Doctor to see if I could fix the problem. When the program started up, it told me there were problems with my partitions and asked if I wanted them fixed (I cannot remember the exact message, but I believe it had something to do with the cylinder boundaries), so, blindly, without thinking of the consequences, I did. Now my system will not boot. I tried booting from the Windows 7 installation CD. I went to install a fresh copy, but it said that "No drives were found". I then opened up diskpart. According to diskpart, there is only one partition, containing one volume, assigned the letter "C". Before, I had four partitions! It is also saying that the file system is RAW. Is there any way I can fix this? I have important data that I do not want to lose. Later on, I tried fdisk with the option -l, which lists the partition table(s), and this is what I got:

        Ignoring extra extended partition 4

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 64 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x163df116

           Device Boot      Start       End      Blocks   Id  System
        /dev/sda1                6        18      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2               18      7851    62918572+   7  HPFS/NTFS
        /dev/sda3            13073     30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3            13073     30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3            13073     30403   139203193    7  HPFS/NTFS

    I don't know if this will help, but it's extra information, at least. Also, this is how I had my partitions:

        40MB   (unallocated)
        100MB  (System Reserved)
        60GB   (Windows, C:)
        40GB   (was reserved for a secondary OS)
        ~132GB (Home, E:)
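
    Before attempting any repair, it may be worth dumping the raw MBR entries read-only and comparing them against what fdisk printed above (a stdlib Python sketch; run as root, nothing is written to disk):

        # Print the four primary MBR partition entries from the first sector.
        import struct

        with open("/dev/sda", "rb") as disk:
            mbr = disk.read(512)

        for i in range(4):
            entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
            ptype = entry[4]
            lba_start = struct.unpack_from("<I", entry, 8)[0]
            sectors = struct.unpack_from("<I", entry, 12)[0]
            print("entry %d: type=0x%02x start_lba=%d sectors=%d"
                  % (i + 1, ptype, lba_start, sectors))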

    Read the article

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/. For dev/testing, we don't need any acceleration or load balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching; all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included the full iptables config script.
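
    A quick way to check from a dev client whether the DNAT is actually delivering packets to the IIS box (a stdlib Python sketch; the hostname is the one from the question, and certificate verification is disabled since only reachability matters here):

        # A completed TLS handshake means port 443 is being forwarded;
        # a timeout or connection refused means the DNAT isn't taking effect.
        import socket
        import ssl

        host = "www.mysite.com.development"
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        try:
            with socket.create_connection((host, 443), timeout=5) as raw:
                with ctx.wrap_socket(raw, server_hostname=host) as tls:
                    print("handshake OK:", tls.version())
        except OSError as exc:
            print("no TLS service reachable:", exc)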

    Read the article

  • iptables & IP-routed network solution for host net and VMs' subnet

    - by Daniel
    I've got a ProxmoxVE 2.1-ruled KVM node on Debian and a bunch of guest VMs. This is how my networking looks:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give the VMs two network adapters each: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to resolve the problem, and that everything could be done via iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes, and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restore from iCloud, but that's water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?)

    The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe-data-and-restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite being "old", was right, and therefore the SMSes, calendar entries, etc. were discarded?

    As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files.

    Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions. He simply cd'd to the directory, and everything was there! It was never lost in the first place!!! Needless to say, I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically the SleuthKit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot from it, but ultimately it did not help with the type of files I was dealing with (very many, some of them very large). So thanks to all of you who provided suggestions, and I wish you all the best!
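
    For reference, a sketch of what browsing a filesystem image with the SleuthKit's Python bindings (pytsk3) looks like - the image path is hypothetical, and everything here is read-only:

        # List root-directory entries, flagging ones whose metadata is
        # unallocated (i.e. candidates for deleted files).
        import pytsk3

        img = pytsk3.Img_Info("/path/to/disk.img")   # hypothetical image
        fs = pytsk3.FS_Info(img)

        for entry in fs.open_dir("/"):
            name = entry.info.name.name.decode("utf-8", errors="replace")
            meta = entry.info.meta
            deleted = meta is None or (meta.flags & pytsk3.TSK_FS_META_FLAG_UNALLOC)
            print(("DELETED " if deleted else "        ") + name)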

    Read the article

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
    So I've been doing some reading, and it seems like the best practice would be to have separate database, application, and web servers. There are a few things that I've failed to understand - please feel free to recommend any reading materials that would address these topics.

    Database (assume MySQL) <-> application server communication: does the database server do any sort of checking on the SQL commands sent and the data returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data?

    Application server (assume Tomcat) <-> web server: almost the reverse here - is the web server more of a pipe to the internet that forwards requests to the application server and spits back responses? I'm not wording this well, but I'm trying to ask: is it the application server that is responsible for validating the data received in requests? For example:

    - parsing POSTs
    - validating user logins
    - encrypting/decrypting data

    Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here; while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application, used in house, written in C++ or something?
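
    One conventional answer, shown as a small sketch (Python, with the stdlib's sqlite3 standing in for MySQL so it runs self-contained): the application tier owns input validation, while the database stays a mostly "dumb pipe" for SQL that still enforces its own declared constraints.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT NOT NULL,"
                     " age INTEGER CHECK (age >= 0))")

        def add_user(name, age):
            # application-tier validation (parsing/validating request data)
            if not name or not isinstance(age, int):
                raise ValueError("invalid input")
            # parameterized SQL; the DB still rejects constraint violations
            conn.execute("INSERT INTO users VALUES (?, ?)", (name, age))

        add_user("alice", 30)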

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies; I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried this config (taken from the docs), but it didn't work.

    /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS", and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control them from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs; instead, nginx will store it for 1h because of those directives. I was thinking about trying proxy_cache, though I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
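
    A quick probe to watch what nginx actually does with the backend's headers (a stdlib Python sketch; the URL is a placeholder for whatever the site serves): request the same page twice inside the s-maxage window and see whether X-Cache flips from MISS to HIT.

        import urllib.request

        for attempt in (1, 2):
            resp = urllib.request.urlopen("http://localhost/")
            print("attempt", attempt,
                  "| X-Cache:", resp.headers.get("X-Cache"),
                  "| Cache-Control:", resp.headers.get("Cache-Control"))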

    Read the article

  • Windows 7 BSOD, safe mode working

    - by mil0ck
    I need help getting my dual boot of Windows 7 and Ubuntu 12.04 to work, and since I'm not too experienced with computers, I've done some pretty dumb things. Here's what I've done, quite shortened though:

    - First removed an Ubuntu 10.04 partition and replaced the bootloader using EasyBCD
    - Installed Ubuntu 12.04 on the same partition as before
    - Decided I needed more space, so I shrank my C: partition (the Windows partition) using Here
    - After rebooting, I was stuck at grub rescue; I fixed that by using SuperGrubDisk Rescatux and then my Windows Vista install disk to repair the computer (the computer is using Windows 7)
    - Reinstalled Ubuntu 12.04 on the Linux partition and got the GRUB bootloader working
    - Then, after using my computer for several hours, I installed beta drivers (version 304.79) for my GeForce GTS 240 and rebooted

    At the first boot after that reboot, my computer just crashed, and here I am. When trying to boot Windows 7 now, I get a Blue Screen of Death, though I can boot into safe mode with everything working. I have uninstalled the beta drivers and installed the same ones as before, but the problem remains. I have tried all the commands in Bootrec.exe, and none are working; Bootrec.exe /ScanOs can't find an OS either. I have also tried running sfc /scannow, and that comes out clean. In short: my hard drive and files seem to be intact, but when booting I get a bluescreen, while safe mode boots with everything working. I need help. Thanks to anyone who even took the time to read that. //mil0ck

    Read the article

  • newbie: Allow domain users to change power-savings settings

    - by user65007
    I've just recently installed SBS 2011 on a server and added several computers to its domain. Now I've noticed that I cannot change the power settings, even when logged in as a user in the Domain Administrator role (let's call it Admin for future reference). After some googling, I ended up adding Admin to the local Administrators group using the Group Policy Management Editor. As I have no experience in server administration, I'm not sure I did it right; what I did was:

    - Go to Group Policy Management and select Forest: xxxxx - Domains - xxxxx - Group Policy Objects - Windows SBS Client - Windows 7 and Windows Vista Policy
    - Go to the Settings tab on the right, right-click, and select Edit to open the Group Policy Management Editor
    - Go to User Configuration - Preferences - Control Panel Settings - Local Users and Groups
    - Right-click, select New - Local Group, set Action to "Update" and Group Name to "Administrators (built-in)", and add Admin to Members

    After that, I was able to change the power-saving settings on client computers (when logged in as Admin). Now the question: what should I do to allow any domain user to change these settings? Note that I do not want to force some predefined power plan onto all computers; I want any domain user on any client computer to be able to select a different power plan and make any adjustments to the selected one. Thank you for any suggestions - just keep in mind that I'm a newbie (but not completely dumb), so please answer accordingly :)

    Read the article

  • Easiest way to find out if user has either Windows 7 or Vista (through telephone support)?

    - by Rabarberski
    If you have to provide some initial troubleshooting support by phone [or email], and you don't have access to the PC itself, what is the easiest and most foolproof question to find out whether the 'dumb' user is running Windows 7 or Windows Vista? For example, determining whether the user has Windows XP or Windows Vista/7 is easy: just ask the user whether the button at the bottom left corner is (a) square with the word 'Start' on it, or (b) round. But how to tell the difference between Vista and 7? Edit: For all the existing answers, the user has to type something, and type it correctly. Sometimes even that is hard for a computer-illiterate user. My XP example just requires looking. If it exists (although I am afraid it doesn't), a solution based purely on something that looks different between Vista and 7 would stand above all others. (Which makes Dan's suggestion to turn the box over and look at the label not so stupid.) Perhaps the small 'show desktop' rectangle at the right side of the taskbar (was that present in Vista)?

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question - I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, telling me how to ask a smarter one, or what kind of material I should read first to improve it, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited the ssh access to the server to key-only.

    1. When setting up the web hosting, Plesk created an FTP login and user - is that a potential security risk that could bypass the key-only access?
    2. I think Plesk itself (even without the FTP user account), through its web interface, could be a risk - is that correct, or are my concerns exaggerated?
    3. Would you say this solution makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 and 15 (due to appearing in fewer search results in the beginning, perhaps, or for whatever reason...)?

    Read the article

  • I have just created a subnet for a local network, connecting to a standalone server on another network; now I cannot connect to the internet

    - by Seth
    I am just learning some new aspects of servers and networking. We have a network of 5 subnets that all interconnect with each other. In order to get two computers onto the subnet that we were setting up, I changed their IPs from the subnet the standalone server is on (where they used to be set up) to the local subnet we are remotely hooking up; likewise, I also changed the gateway to match the new subnet. The only problem is that since doing this, I am unable to establish a connection to the internet. I can ping the server and the corresponding gateway & DNS server, but cannot get connected to the internet. We do have a dumb switch (non-programmable) connected that receives both the internet and private-network inputs and distributes them (or should do so) to about 5 other computers. Bottom line: I cannot currently connect to the internet and am wondering what could be causing this. It is likely something very obvious, and pardon me for being more vague than I probably should be, but I could use some help resolving this! Thanks for any help!
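
    Since the gateway and DNS server answer pings but the internet doesn't load, one useful step is to separate "no route out" from "no name resolution" (a stdlib Python sketch, run from one of the affected machines):

        import socket

        # Raw-IP connection to a public resolver: tests routing/NAT only.
        try:
            socket.create_connection(("8.8.8.8", 53), timeout=5).close()
            print("IP routing to the internet works")
        except OSError as exc:
            print("no route out - check gateway/NAT:", exc)

        # Name lookup: tests DNS separately.
        try:
            print("DNS resolves:", socket.gethostbyname("example.com"))
        except OSError as exc:
            print("DNS lookup failed:", exc)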

    Read the article

  • XamlParseException using Silverlight Toolkit control in Expression Blend

    - by Dan Auclair
    I am having a strange issue opening up my UserControl in Expression Blend when using a Silverlight Toolkit control. My UserControl uses the toolkit's ListBoxDragDropTarget as follows:

        <controlsToolkit:ListBoxDragDropTarget mswindows:DragDrop.AllowDrop="True"
            HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch">
            <ListBox ItemsSource="{Binding MyItemControls}"
                ScrollViewer.HorizontalScrollBarVisibility="Disabled">
                <ListBox.ItemsPanel>
                    <ItemsPanelTemplate>
                        <controlsToolkit:WrapPanel/>
                    </ItemsPanelTemplate>
                </ListBox.ItemsPanel>
            </ListBox>
        </controlsToolkit:ListBoxDragDropTarget>

    Everything works as expected at runtime and looks fine in Visual Studio 2008. However, when I try to open my UserControl in Blend, I get XamlParseException: [Line: 0 Position: 0], and I cannot see anything in the design view. More specifically, Blend complains: The element "ListBoxDragDropTarget" could not be displayed because of a problem with System.Windows.Controls.ListBoxDragDropTarget: TargetType mismatch. My Silverlight application references System.Windows.Controls.Toolkit from the Nov. 2009 toolkit release, and I've made sure to include these namespace declarations for the ListBoxDragDropTarget:

        xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"
        xmlns:mswindows="clr-namespace:Microsoft.Windows;assembly=System.Windows.Controls.Toolkit"

    If I comment out the ListBoxDragDropTarget wrapper and just leave the ListBox, I can see everything fine in the design view, without errors. Furthermore, I realized this happens with a variety of Silverlight Toolkit controls: if I comment out ListBoxDragDropTarget and replace it with <controlsToolkit:BusyIndicator />, the same exact error occurs in Blend. What is even weirder is that if I start a brand-new Silverlight application in Blend, I can add these toolkit elements without any kind of error, so it seems like something dumb happening with my project references to the toolkit assemblies. I'm pretty sure this has something to do with loading the default styles for the toolkit controls from its generic.xaml, since the error has to do with the TargetType and Blend is probably trying to load up the default styles. Has anyone encountered this issue before, or have any ideas as to what my problem may be?

    Read the article

  • Writing a docx file to a BLOB using Java 1.4 inside Oracle 10g

    - by Jon Renaut
    I'm trying to generate a blank docx file using Java, add some text, then write it to a BLOB that I can return to our document-processing engine (a custom mess of PL/SQL and Java). I have to use the 1.4 JVM inside Oracle 10g, so no Java 1.5 stuff. I don't have a problem writing the docx to a file on my local machine, but when I try to write to the BLOB, I'm getting garbage. Am I doing something dumb? Any help is appreciated. Note that in the code below, all the get[name]Xml() methods return an org.w3c.dom.Document.

        public void save(String fileName) throws Exception {
            ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(fileName));
            addEntry(zos, getDocumentXml(), "word/document.xml");
            addEntry(zos, getContentTypesXml(), "[Content_Types].xml");
            addEntry(zos, getRelsXml(), "_rels/.rels");
            zos.flush();
            zos.close();
        }

        public java.sql.BLOB save() throws Exception {
            java.sql.Connection conn = DbUtilities.openConnection();
            BLOB outBlob = BLOB.createTemporary(conn, true, BLOB.DURATION_SESSION);
            outBlob.open(BLOB.MODE_READWRITE);
            // NB: java.sql.Blob positions are 1-based, so the 0L here is suspect
            ZipOutputStream zos = new ZipOutputStream(outBlob.setBinaryStream(0L));
            addEntry(zos, getDocumentXml(), "word/document.xml");
            addEntry(zos, getContentTypesXml(), "[Content_Types].xml");
            addEntry(zos, getRelsXml(), "_rels/.rels");
            zos.flush();
            zos.close();
            return outBlob;
        }

        private void addEntry(ZipOutputStream zos, Document doc, String fileName) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            t.transform(new DOMSource(doc), new StreamResult(baos));
            ZipEntry ze = new ZipEntry(fileName);
            byte[] data = baos.toByteArray();
            ze.setSize(data.length);
            zos.putNextEntry(ze);
            zos.write(data);
            zos.flush();
            zos.closeEntry();
        }

    Read the article

  • Silverlight: Problem using CellEditingTemplate

    - by Charles
    Hello Silverlight gurus, I am experiencing a problem with the DataGrid where my data-bound object's properties are not updated when using CellTemplate/CellEditingTemplate:

        <data:DataGridTemplateColumn Header="Text">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Text}" ></TextBlock>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
            <data:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <TextBox Text="{Binding Text, Mode=TwoWay}" />
                </DataTemplate>
            </data:DataGridTemplateColumn.CellEditingTemplate>
        </data:DataGridTemplateColumn>

    I am binding to a code-generated entity via RIA Services. I've added an event handler to the PropertyChanged event, and it is never fired. However, if I do not use a template and instead use a DataGridTextColumn, everything works fine. I'm sure this sounds like an easy fix - I'm only using a TextBox in my editing template, so why not use a DataGridTextColumn? The problem is that I want a multi-line textbox, so using DataGridTextColumn is not an option. Any suggestions? Do you know of any differences between using a CellEditingTemplate containing a single TextBox and using a DataGridTextColumn? Thanks, -Charles

    [UPDATE] I posted a bug report here: http://silverlight.net/forums/p/118729/267521.aspx I can't imagine that this is "as designed"... If someone else has known about this and I'm just being dumb, I'd appreciate an explanation - I'd prefer embarrassment over ignorance :).

    Read the article

  • Asynchronous pages in the ASP.NET framework - where are the other threads and how is it reattached?

    - by rkrauter
    Sorry for this dumb question on asynchronous operations. This is how I understand it: IIS has a limited set of worker threads waiting for requests. If one request is a long-running operation, it will block that thread, which leaves fewer threads to serve other requests. The way to fix this is to use asynchronous pages: when a request comes in, the main worker thread is freed, and the work continues on some other thread in some other place. The main thread is thus able to serve other requests. When the request completes on this other thread, another thread is picked from the main thread pool and the response is sent back to the client.

    1) Where are these other threads located?
    2) If ASP.NET likes creating new threads, why not increase the number of threads in the main worker pool - they are all running on the same machine anyway?
    3) If the main thread hands off a request to this other thread, why does the request not get disconnected? It magically hands off the request to another worker thread somewhere else, and when the long-running process completes, it picks a thread from the main worker pool and sends the response to the client. I am amazed... but how does that work?
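
    Purely as an analogy (Python's asyncio, not ASP.NET internals): the key idea is that nothing sits on a thread while the long operation is pending - the request's state is parked, the worker goes back to the pool, and completion resumes on whichever worker is free.

        # Three "requests" overlap their waits instead of each blocking a worker.
        import asyncio

        async def handle_request(n):
            print("request", n, "started")
            await asyncio.sleep(1)      # long I/O; no thread is blocked here
            print("request", n, "finished")

        async def main():
            await asyncio.gather(*(handle_request(i) for i in range(3)))

        asyncio.run(main())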

    Read the article

  • How do you keep your business rules DRY?

    - by Mario
    I periodically ponder how best to design an application where every business rule exists in just a single location. (While I know there is no proverbial "best way" and that designs are situational, people must lean toward one practice or another.) I work for a shop where they prefer to house as many of the business rules as possible in the database. This requires developers in many cases to perform identical front-end validations to avoid sending the database data that will result in an exception - not very DRY. It grates on me any time I find myself duplicating any kind of logic - even lowly validation logic; I am a single-point-of-truth purist to an anal degree. On the other end of the spectrum, I know of shops that create dumb databases (the Rails community leans in this direction) and handle all of the business logic in a separate tier (in Rails, the models would house "most" of this). Note the word "most", which implies that some business logic does end up spilling into other places (in Rails, it might spill over into the controllers). In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that's hard to uphold (n-tiered architecture or not). Furthermore, I see the "database as a fortress" view and would agree that the database should be built on constraints that cause it to reject bad data. As such, I hold principles that cause a degree of angst as I attempt to balance them. How do you balance the database-as-a-fortress view with the desire to have a single point of truth?
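
    One compromise people reach for (a sketch, not a recommendation; all names are illustrative): declare each rule once in a neutral form, then derive both the front-end check and the database constraint from that single definition, so the fortress and the single point of truth stop fighting.

        # One rule definition drives both tiers.
        RULES = {"age": {"min": 0, "max": 150}}

        def validate(field, value):
            rule = RULES[field]
            return rule["min"] <= value <= rule["max"]

        def ddl(table):
            checks = ", ".join(
                "%s INTEGER CHECK (%s BETWEEN %d AND %d)"
                % (field, field, rule["min"], rule["max"])
                for field, rule in RULES.items())
            return "CREATE TABLE %s (%s)" % (table, checks)

        print(validate("age", 42))   # front-end validation...
        print(ddl("people"))         # ...and the DB constraint, same source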

    Read the article

  • Python coding test problem for interviews

    - by Kal
    I'm trying to come up with a good coding problem to ask interview candidates to solve with Python. They'll have an hour to work on the problem, with an IDE and access to documentation (we don't care what people have memorized). I'm not looking for a tough algorithmic problem - there are other sections of the interview where we do that kind of thing. The point of this section is to sit and watch them actually write code, so it should be something that makes them use the data structures which are the everyday tools of the application developer - lists, hashtables (dictionaries in Python), etc. - to solve a quasi-realistic task. They shouldn't be blocked completely if they can't think of something really clever.

    We have a problem which we use for Java coding tests, which involves reading a file and doing a little processing on the contents. It works well with candidates who are familiar with Java (or even C++). But we're running into a number of candidates who just don't know Java or C++ or C# or anything like that, but do know Python or Ruby. That shouldn't exclude them, but it leaves us with a dilemma: on the one hand, we don't learn much from watching someone struggle with the basics of a totally unfamiliar language; on the other hand, the problem we use for Java turns out to be pretty trivial in Python (or Ruby, etc.) - anyone halfway competent can do it in 15 minutes.

    So, I'm trying to come up with something better. Surprisingly, Google doesn't show me anyone doing something like this, unless I'm just too dumb to enter the obvious search term. The best idea I've come up with involves scheduling workers to time slots, but it's maybe a little too open-ended. Have you run into a good example? Or a bad one? Or do you just have an idea?

    Read the article

  • Business Logic Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I would appreciate no short/dumb answers like "Oh, that is the model's job; this question is retarded (period)".

    PROBLEM: Where I work, people spent over 2 years creating a system for managing the manufacturing process on demand - in the most simplified yet broad way possible - involving selling, buying, and assembly. The system is coded on Ruby on Rails. The app has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: totally bad.

    The QUESTION is: is there a gem, or a pattern, designed to handle the logic of a large Rails app? The logic would be able to talk fully to the models (whose only concerns would be data-format handling and validation).

    What I EXPECT is to reduce the complexity of various controllers and hard-to-track callbacks into files whose responsibility is to handle one business operation's logic. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process would take place. For example - sell some products (need to wait for the operation to finish):

    1. Set up a view able to get the products input
    2. The controller gets the product list entered by the employee and calls the logic:

        Logic::ExecuteWithResponse('sell', 'products',
            :prods    => @product_list_with_qtt,
            :when     => @date,
            :employee => current_user())

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others. Keep in mind that a callback on SalesOrder is not enough, since it depends on where it is called (there's no field for that) and on the class of the user, among other stuff not visible to the model; also, in some cases it would take too long for the model to process.

    Read the article

  • Visual Studio Macros on 64 bit fail with COM error

    - by bruce.kinchin
    I'm doing some JavaScript development and found a cool macro to region my code ("Using #region Directive With JavaScript Files in Visual Studio"). I used this on my 32-bit box, and it worked the first time (Visual Studio 2008 SP1, Win7). For ease of reference, the macro is:

        Option Strict Off
        Option Explicit Off
        Imports System
        Imports EnvDTE
        Imports EnvDTE80
        Imports System.Diagnostics
        Imports System.Collections

        Public Module JsMacros

            Sub OutlineRegions()
                Dim selection As EnvDTE.TextSelection = DTE.ActiveDocument.Selection
                Const REGION_START As String = "//#region"
                Const REGION_END As String = "//#endregion"
                DTE.ExecuteCommand("Edit.StopOutlining")
                selection.SelectAll()
                Dim text As String = selection.Text
                selection.StartOfDocument(True)
                Dim startIndex As Integer
                Dim endIndex As Integer
                Dim lastIndex As Integer = 0
                Dim startRegions As Stack = New Stack()
                Do
                    startIndex = text.IndexOf(REGION_START, lastIndex)
                    endIndex = text.IndexOf(REGION_END, lastIndex)
                    If startIndex = -1 AndAlso endIndex = -1 Then
                        Exit Do
                    End If
                    If startIndex <> -1 AndAlso startIndex < endIndex Then
                        startRegions.Push(startIndex)
                        lastIndex = startIndex + 1
                    Else
                        ' Outline region ...
                        selection.MoveToLineAndOffset(CalcLineNumber(text, CInt(startRegions.Pop())), text.Length)
                        selection.MoveToLineAndOffset(CalcLineNumber(text, endIndex) + 1, 1, True)
                        selection.OutlineSection()
                        lastIndex = endIndex + 1
                    End If
                Loop
                selection.StartOfDocument()
            End Sub

            Private Function CalcLineNumber(ByVal text As String, ByVal index As Integer)
                Dim lineNumber As Integer = 1
                Dim i As Integer = 0
                While i < index
                    If text.Chars(i) = vbCr Then
                        lineNumber += 1
                        i += 1
                    End If
                    i += 1
                End While
                Return lineNumber
            End Function

        End Module

    I then tried to use the same macro on two separate 64-bit machines (Win7 x64), identical except for the 64-bit OS version, and it fails to work. Stepping through it with the Visual Studio Macros IDE, it fails the first time on the DTE.ExecuteCommand("Edit.StopOutlining") line with a COM error (Error HRESULT E_FAIL has been returned from a call to a COM component). If I attempt to run it a second time, I can run it from the Macro Editor with no issue, but not from within Visual Studio with the Macro Explorer 'run macro' command. I have reviewed the following articles without finding anything helpful: Stackoverflow: Visual Studio 2008 macro only works from the Macro IDE, not the Macro Explorer; Recorded macro does not run; Failing on DTE.ExecuteCommand. Am I missing something dumb?

    Read the article
