Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • Asterisk server firewall script allows 2-way audio from incoming calls, but not on outgoing?

    - by cappie
    I'm running an Asterisk PBX on a virtual machine directly connected to the Internet and I really want to prevent script kiddies, l33t h4x0rz and actual hackers from gaining access to my server. The basic way I protect my calling bill now is by using 32-character passwords, but I would much rather have a firewall protecting the server as well. The firewall script I'm currently using is shown below; however, without the established-connection firewall rule (mentioned rule #1), I cannot receive incoming audio from the target during outgoing calls:

        #!/bin/bash
        # first, clean up!
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD DROP  # we're not a router
        iptables -P OUTPUT ACCEPT
        # don't allow invalid connections
        iptables -A INPUT -m state --state INVALID -j DROP
        # always allow connections that are already set up (MENTIONED RULE #1)
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # always accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # always accept traffic on these ports
        #iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # always allow DNS traffic
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        # allow return traffic to the PBX
        iptables -A INPUT -p udp -m udp --dport 50000:65536 -j ACCEPT
        iptables -A INPUT -p udp -m udp --dport 10000:20000 -j ACCEPT
        iptables -A INPUT -p udp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -p tcp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -m multiport -p udp --dports 10000:20000
        iptables -A INPUT -m multiport -p tcp --dports 10000:20000
        # IP addresses of the office
        iptables -A INPUT -s 95.XXX.XXX.XXX/32 -j ACCEPT
        # accept everything from the trunk IP's
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        # accept everything on localhost
        iptables -A INPUT -i lo -j ACCEPT
        # accept all outgoing traffic
        iptables -A OUTPUT -j ACCEPT
        # DROP everything else
        #iptables -A INPUT -j DROP

    I would like to know what firewall rule I'm missing for this all to work. There is so little documentation on which ports (incoming and outgoing) Asterisk actually needs (return ports included). Are there any firewall/iptables specialists here who see major problems with this firewall script? It's so frustrating not being able to find a simple firewall solution that enables me to have a PBX running somewhere on the Internet which is firewalled in such a way that it ONLY allows connections from and to the office, the DNS servers and the trunk(s) (and only supports SSH (port 22) and ICMP traffic for the outside world). Hopefully, using this question, we can solve this problem once and for all.
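
    For comparison, here is a minimal sketch (not taken from the question) of the kind of lockdown described in the last paragraph: SSH and ICMP stay open to the world, while SIP signalling and RTP media are accepted only from the office and trunk addresses. The OFFICE and TRUNKS values are placeholders, and the RTP range should match rtpstart/rtpend in /etc/asterisk/rtp.conf.

        #!/bin/bash
        # Sketch only: replace the placeholder addresses with the real office
        # and trunk IPs before use.
        OFFICE="198.51.100.10"
        TRUNKS="203.0.113.20 203.0.113.21"

        iptables -F
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A INPUT -m state --state INVALID -j DROP
        iptables -A INPUT -p icmp -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # SIP signalling and RTP media only from known peers
        for ip in $OFFICE $TRUNKS; do
            iptables -A INPUT -s "$ip" -p udp --dport 5060:5061 -j ACCEPT
            iptables -A INPUT -s "$ip" -p tcp --dport 5060:5061 -j ACCEPT
            iptables -A INPUT -s "$ip" -p udp --dport 10000:20000 -j ACCEPT
        done

        # everything else is refused; set the policy last so the session
        # adding the rules is not cut off halfway through
        iptables -P INPUT DROP

    Outbound DNS lookups and SIP registrations still work in this sketch because OUTPUT is open and the RELATED,ESTABLISHED rule lets the replies back in.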

    Read the article

  • Good bitmap fonts with big sizes and unicode support

    - by bitonic
    I really like bitmap fonts for programming/terminal. As far as I know there are two bitmap fonts with good unicode support: Unifont Fixed The problem is that I have a really high resolution screen, and they're both too small. Fixed does include a large size (10x20) but it looks really bad (it's basically always bold, and bold is a different face). Are there any other bitmap fonts with unicode support and large sizes? Terminus is the only font with a decent size but it doesn't have good unicode support. Having good coverage for mathematical symbols would be enough, since that's what I need.

    Read the article

  • Is it safe to remove Per user queued Windows Error Reporting?

    - by Rewinder
    I was cleaning up my laptop hard-disk, running Windows 7, and as part of the process I ran the Disk Cleanup utility. To my surprise I saw 2 items in the list that were quite large (both ~300MB). Per user queued Windows Error Reporting System queued Windows Error Reporting I guess I had never noticed these, because they were never that big. So, what are these items? Any particular reason why they became so large all of a sudden? And finally, is it safe to remove them?

    Read the article

  • What is the correct network configuration for a devStack VM (virtualbox)?

    - by Olivier
    Usually when I set up a new Ubuntu VM, I keep eth0 in NAT mode to get to the Internet, and I add an eth1 interface in host-only mode so that I can SSH in. But using this devStack guide, Running a Cloud in a VM, it looks like it tried to use eth0 as the public interface (the install got stuck because eth0 lost the network). I know an OpenStack setup usually requires two NICs, so I'm wondering what the correct configuration for my VM is.
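
    One plausible arrangement (a sketch under stated assumptions, not taken from the guide): leave eth0 (NAT) as the default route for outbound traffic and point DevStack at the host-only eth1 address, so that losing the "public" side during the install does not cut off access from the host. A hypothetical localrc fragment, with placeholder addresses:

        # all values below are placeholders for a VirtualBox host-only network
        HOST_IP=192.168.56.10          # the eth1 (host-only) address
        FLAT_INTERFACE=eth1
        FLOATING_RANGE=192.168.56.128/27
        ADMIN_PASSWORD=devstack
        DATABASE_PASSWORD=devstack
        RABBIT_PASSWORD=devstack
        SERVICE_TOKEN=devstack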

    Read the article

  • Aligning Numbered Bullet Points in Word 2007

    - by Frustratedwithbullets
    Hello, I am putting together a very large business manual which incorporates numbered headings, steps to follow, diagrams, etc. When using the bullet points, they align perfectly as I work through the processes. However, when I include a diagram, or something different from the "norm" of text, the alignment changes. I would like all the bullet points to be aligned in the whole document, regardless of where they appear in the document. Is there a way to save the settings so that the bullets always appear in the same position? Currently I am having to reset the indents by dragging the tabs on the ruler. This will be a large document, so I don't want to manually adjust the numbered bullets every time. Help would be greatly appreciated. Thanks very much.

    Read the article

  • Choose source interface for PPTP VPN on Ubuntu

    - by Emyl
    I have an Ubuntu VirtualBox guest with two network interfaces, eth0 (NAT) and eth1 (bridged). I want to connect to a PPTP VPN using eth1, but I don't know how to specify which interface to use. If I just try:

        sudo pon myvpn nodetach

    it fails with:

        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        Modem hangup
        Connection terminated.

    Looking at the routing table with route seems to indicate that eth0 is being used:

        x.x.x.x.no      10.0.2.2        255.255.255.255 UGH   0      0        0 eth0
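
    There is no interface option for pon itself; the PPTP control connection simply follows the kernel routing table to the VPN server, so one common workaround is to add a more specific route out of eth1 before dialling. A sketch with placeholder addresses (the VPN server address and the bridged network's gateway are assumptions):

        # placeholders: substitute the address that "myvpn" dials and the
        # gateway on the bridged (eth1) network
        VPN_SERVER=203.0.113.10
        ETH1_GW=192.168.1.1

        sudo ip route add ${VPN_SERVER}/32 via ${ETH1_GW} dev eth1
        sudo pon myvpn nodetach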

    Read the article

  • Why can't we reach some (but not all) external web service via VPN connection?

    - by Paul Haldane
    At work (UK university) we use a set of Windows servers running WS2008R2 and RRAS which offer VPN service to students in our accommodation. We do this to associate the network connections with individuals. Before they've connected to the VPN, all they can talk to is the stuff that's needed to set up the VPN and a local web site with documentation on how to connect. Medium term we'll probably replace this, but it's what we're using at the moment. VPN on the 2008 servers allocates clients a private (10.x) address. Access to external sites is through NAT on the campus routers (same as any other directly connected client on a private address). Non-VPN connections aren't seeing this problem. Older servers run WS 2003 and ISA 2004. That setup works but has become unreliable under load. The big difference there was that we were allocating non-RFC1918 addresses to the clients (so no NAT required).

    The behaviour we're seeing is that once connected to the VPN, clients can reach local web sites (that is, sites on the campus network) but only some external sites. It seems (but this may be chance) that the sites we can reach are Google ones (including YouTube). We certainly have trouble reaching Microsoft's Office 365 service (which is a pain because that's where mail for most of our students is). One odd bit of behaviour is that clients can fetch (using wget on a Windows 7 client) http://www.oracle.com/ (which gets a 301 redirect) but hang when asked to fetch http://www.oracle.com/index.html (which is what the first URL redirects to). Access works reliably if we configure clients to use our local web proxies (Squid).

    My gut tells me that this is likely to be something in the chain dropping replies, either based on HTTP inspection or the IP address in the reply. However, I'm puzzled about why we're seeing this with the VPN clients. The plan for tomorrow (when I'm back in the office) is to set up a web server on an external connection so that we can monitor behaviour at both ends of the conversation (hoping that the problem manifests itself with our test server). Any suggestions for things we should be looking at?
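
    For the test described in the last paragraph, a sketch of the captures that should show in which direction the replies disappear (interface names, the test server name and the filters are assumptions):

        # on a machine that can see the campus-side traffic for the test client
        sudo tcpdump -ni eth0 -w campus-side.pcap host testserver.example.org

        # on the external test web server itself
        sudo tcpdump -ni eth0 -w server-side.pcap port 80

    If small responses (like the oracle.com 301) show up in both captures but the larger ones never make it back to the client, MTU or fragmentation handling on the VPN path is one common culprit worth ruling out.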

    Read the article

  • Shortcut with arguments in Debian

    - by Duncan
    I have a volume on a Debian server which contains a large number of images at full resolution in various folders. What I'd like to do is have a separate "browse proxy" folder which contains lower-quality browse copies of these, to enable users to access them for viewing over lower-speed dial-in accounts. I'd ideally like these to be created on the fly using ImageMagick, so there isn't the need to store the large number of browse copies full time and worry about keeping them up to date, etc. The way I'd envisaged this happening is the browse proxy folder containing a duplicate file and folder structure, but with symlinks pointing to a script that transforms them, with the file path as an argument. Except I know this isn't possible with symlinks, so I am wondering if there's another way of doing this on Linux. On Windows, shortcuts can take arguments, and I'm wondering how to do the same on a Linux platform? (Or perhaps I'm going about this the wrong way?)
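
    Since a symlink cannot carry arguments, one common substitute is a single wrapper script that takes the relative path as its argument and produces the browse copy on demand, so nothing is stored permanently. A sketch (the paths and the 25% scale factor are assumptions), which could be invoked from a small CGI handler or similar rather than from a shortcut:

        #!/bin/bash
        # browse-copy.sh: emit a reduced-quality JPEG of an original to stdout
        ORIGINALS=/srv/images      # placeholder: the full-resolution volume
        REL="$1"

        # refuse paths that try to escape the originals tree
        case "$REL" in
            *..*) echo "bad path" >&2; exit 1 ;;
        esac

        exec convert "$ORIGINALS/$REL" -resize 25% -quality 60 jpeg:-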

    Read the article

  • Client unable to reach Internet through OpenVPN

    - by Carroarmato0
    The clients can all connect through OpenVPN. OpenVPN serves the following pool:

        server 10.8.0.0 255.255.255.0

    I've configured the server's iptables with the following rule:

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    and

        echo 1 > /proc/sys/net/ipv4/ip_forward

    This used to work back on the old VPS I used. Now I've migrated to a VPS which has IPv6 connectivity. Is it possible that IPv6 has something to do with the fact that the clients can't reach the Internet?
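
    A few things worth checking on the new VPS (a sketch; venet0 is only a guess at how such a VPS might be wired, and the IPv6 point is a possibility rather than a diagnosis):

        sysctl net.ipv4.ip_forward        # should print 1, and survive a reboot
        ip route get 8.8.8.8              # shows which interface traffic really leaves on
        iptables -t nat -vnL POSTROUTING  # the MASQUERADE counters should increase
                                          # while a client browses

        # if the VPS egresses via venet0 rather than eth0, the rule must match it:
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE

        # clients that pick up an IPv6 route through the tunnel would bypass the
        # IPv4 MASQUERADE rule entirely, so check what addresses they receive
        ip -6 addr show dev tun0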

    Read the article

  • GitHub updating repository?

    - by user1804933
    I am trying to set up GitHub for my server and have gotten to the point where I am running the command "git push -u origin master". However, a large file was detected and the following error was received:

        remote: error: GH001: Large files detected.
        remote: error: Trace: 5520a70fd2eeaa2eafd7de049a590fb5
        remote: error: See http://git.io/iEPt8g for more information.
        remote: error: File app/logs/dev.log is 2041.59 MB; this exceeds GitHub's file size limit of 100 MB

    I ended up deleting that file and tried adding and pushing again, but I keep running into the same error. Any ideas on how to work around this?
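
    Deleting the file in a new commit is not enough here, because the 2 GB blob still exists in the earlier commits that git push is trying to send, so it has to be rewritten out of the branch history. A sketch using git filter-branch (work on a backup or a fresh clone first; the .gitignore pattern is an assumption):

        # strip the log file from every commit on every branch
        git filter-branch --force --index-filter \
            'git rm --cached --ignore-unmatch app/logs/dev.log' \
            --prune-empty -- --all

        # keep logs out of future commits
        echo 'app/logs/*.log' >> .gitignore
        git add .gitignore
        git commit -m "Ignore log files"

        git push -u origin master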

    Read the article

  • SAN for Medium Business - Where to start? [closed]

    - by Henson
    I've always run Linux on my home computers, and done PC repair for years, but this is my first experience with needing to buy a SAN. I thought I was knowledgeable, but I feel a bit lost. I need to be able to support 25 VMs, which are currently managed through vSphere. The company I'm at is growing quickly, though, so I'd like to plan for the future. Ideally, I want a solution that I can just tack arrays onto and manage as one large iSCSI drive. Suggestions? Good resources? If I can find something that appears to software as one large drive, am I better off going with a solution like FreeNAS or Starwind, or an all-in-one proprietary solution like NetApp? Cost is (of course, and always, I'm sure) an issue.

    Read the article

  • Memory Speeds: 1x4GB or 2x2GB? [closed]

    - by Dasutin
    When it comes to speed, what is faster: having one 4GB module in your system, or having two 2GB modules? I'm not taking into account the fact that the system could have dual-channel capabilities. Also, what about a server environment? Would it be better to have one large, high-density module, or to break it up into several modules for speed and price? I heard an engineer at my office having a discussion with an employee. He said that it's better in all situations to have one large-capacity module instead of breaking it up; it would be cheaper and perform faster. He also said it would take longer for the computer to access what it needed if there were more modules instead of having just one. His explanation didn't seem right to me, and I thought I would post this question here to see what other people thought.

    Read the article

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in even tens of thousands of sprites being drawn to the screen simultaneously. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I have previously taken a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a command like

        int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture,
                                const SDL_Rect* srcrects, SDL_Rect* dstrects, int count)

    which would permit SDL to benefit from two common techniques used for efficient 2D graphics:

    Texture batching: sorting sprites by the texture used, and then rendering as many sprites that use the same texture as possible in one go, changing only the source area on the texture and the destination area on the render target between sprites. This allows the whole operation to be encapsulated in a single GPU command, drastically reducing the overhead compared with multiple distinct calls.

    Texture atlases: instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture. This lessens the impact of changing the current texture when switching between sprites, as the correct texture is often already in use from the previous draw call. Furthermore, the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites.

    My question: would SDL2 still get somewhat faster from any rudimentary sprite sorting, or from combining multiple images into one texture, thanks to automatic video driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower-level control over the GPU? Edit: are there any plans to include such functionality in the near future?

    Read the article

  • Need a solution to store images (1 billion, 1000,000,000) which users will upload to a website via php or javascript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store images (1 billion) which users will upload to a website via PHP or Javascript upload (the website will have 1 billion page views a month, running on Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px, and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux Nginx MySQL PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (in this case, images uploaded by website users) in a single directory or on NFS etc. is bad, please explain all the details about the architecture and configuration of the entire storage setup for 1 billion images, using any method you recommend (no third-party cloud storage like S3 etc.; it has to be within the private data center using our own hardware and resources). The solution has to cover both the storage itself and how the images uploaded by users are organized. How will we organize the users' images if a single user will have no more than 20 images (10 thumbnails and 10 large images with either width or height of 500px)? Please consider that this has to be organized in a structured way so we can fetch a single user's images via PHP/Javascript or an API programmatically through some type of unique user identifier(s).
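
    Whatever backend ends up holding the files, the directory-organization half of the question usually comes down to sharding by a hash of the user identifier so that no directory ever holds more than a few thousand entries, and so the path can be derived from the user ID alone in PHP or anywhere else. A sketch of one such layout (the depth, root path and file names are assumptions):

        # compute a two-level shard from the user ID
        user_id=123456789
        h=$(printf '%s' "$user_id" | sha1sum | cut -c1-4)
        dir="/data/images/${h:0:2}/${h:2:2}/${user_id}"

        mkdir -p "$dir/thumb" "$dir/large"
        # the resulting layout looks like /data/images/xx/yy/123456789/thumb/01.jpg
        # for one of the ten 90x90 thumbnails, with the matching 500px version
        # under .../large/01.jpg
        echo "$dir"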

    Read the article

  • SQL SERVER – Fastest Way to Restore the Database

    - by pinaldave
    A few days ago, I received the following email: "Pinal, We are in an emergency situation. We have a large database of around 80+ GB and its backup is 50+ GB in size. We need to restore this database ASAP and use it; however, restoring the database takes forever. Do you think a compressed backup would solve our problem? Any other ideas you got?" First of all, the asker has already answered his own question. Yes; I have seen that if you are using a compressed backup, it takes less time when you try to restore a database. I have previously blogged about the same subject. Here are the links to those blog posts:

    SQL SERVER – Data and Page Compressions – Data Storage and IO Improvement
    SQL SERVER – 2008 – Introduction to Row Compression
    SQL SERVER – 2008 – Introduction to New Feature of Backup Compression

    However, if your database is so large that it still takes a few minutes to restore even though you use any of the features listed above, then it will really take some time to restore the database. If there is urgency and there is no time you can spare for restoring the database, then you can use the wonderful tool developed by Idera called virtual database. This tool restores a certain database in just a few seconds, so it will readily be available for usage. I have written in depth about my experience with this tool in the article SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database. Let me know your experience in this scenario. Have you ever needed your database backup restored very quickly? What did you do in that scenario? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Desktop Virtualization at HIMSS 2011

    - by chris.kawalek(at)oracle.com
    The HIMSS Conference is an extremely important industry trade show put on by The Healthcare Information and Management Systems Society. It's being held in Florida starting this Sunday, February 20th. Their slogan, "Linking people, potential, and progress", could be true of Oracle desktop virtualization as well! The Oracle desktop virtualization group has worked very closely with the Oracle healthcare business unit to have a large presence at this show, and I wanted to tell you a bit about what we're doing:

    - All Oracle demos are being done on Sun Ray Clients. That's right, every demo pod in the large Oracle booth will have a Sun Ray Client, with each demo tied to a smart card. Too many people at your demo station? Pop your card out and go to a different one. We'll also be demoing Oracle desktop virtualization at a dedicated demo station, too. This is great stuff! Find Oracle at booth #1651. Oracle's page about HIMSS.

    - Focus Group - Caregiver Mobility with Oracle Sun Ray Clients and Desktop Virtualization, Feb 22, 3:15-4:15 PM. This focus group will be for customers interested in Oracle desktop virtualization. It's invitation only, but you can comment on this blog post and we can give you info on how to attend (your comment won't be made public).

    - Solution Session - Fast, Secure, Workflow Optimized: Inexpensive Access to Care Information is Possible Inside and Outside of the Hospital, Feb 23, 4:15 PM, Booth #685, Wireless and Mobility Theatre. Oracle's Adam Workman will cover caregiver mobility and the benefits of Oracle desktop virtualization to healthcare organizations.

    - New healthcare solutions page on oracle.com. We've created a page dedicated to content involving desktop virtualization and healthcare. This will be your one-stop shop if you're looking for desktop virtualization and healthcare information.

    - New desktop virtualization and healthcare solution data sheet. This document outlines how we define "Caregiver Mobility" and how Oracle products are used to facilitate quicker, more secure access to patient data.

    We'll have some more updates from the show next week. It looks like it's going to be an exciting event! -Chris

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and its associated data. It is okay for updates and inserts to run slowly, so long as queries run quickly and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application:

    Web Server/Data Processing: Pipeline Pilot on Windows Server + IIS
    Database: Oracle 10g, ~1TB of data, ~180 tables with several billion-plus row tables
    Network Storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012: There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5. A Better AjaxFileUpload Control We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser. Using the AjaxFileUpload Control Let me walk-through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here’s how you can declare the AjaxFileUpload control in a page: <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" AllowedFileTypes="mp4" OnUploadComplete="AjaxFileUpload1_UploadComplete" runat="server" /> The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here’s what the AjaxFileUpload looks like: Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to App_Data folder AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName)); } This method saves the uploaded file to your website’s App_Data folder. I’m assuming that you have an App_Data folder in your project – if you don’t have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd. 
You need to declare this handler in your application’s root web.config file like this: <configuration> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" maxRequestLength="42949672" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings — as I did in the web.config file above — in order to accept large file uploads. Supporting Chunked File Uploads Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API — such as Google Chrome or Mozilla Firefox — then the file is uploaded in multiple chunks. You can see chunking in action by opening F12 Developer Tools in your browser and observing the Network tab: Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this: http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64 &fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks. Showing Upload Progress on All Browsers The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods which are not supported by .NET 3.5. For example, here is what the Network tab looks like when you use the AjaxFileUpload with Microsoft Internet Explorer: Here’s what the requests in the Network tab look like: GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1 Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server. 
Here’s what the response body of a request looks like when about 20% of a file has been uploaded: Buffering to a Temporary File When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don’t call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to a database table e.DeleteTemporaryData(); } You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts. A Better MaskedEdit Extender This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release: · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error · 11758 MaskedEdit causes error in JScript when working with 2-digits year · 18810 Maskededitextender/validator Date validation issue · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator · 23221 MaskedEditExtender date separator problem · 15233 Day and month swap in Dynamic user control · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction · 9389 MaskedEditValidator – when on no entry · 11392 MaskedEdit Number format messed up · 11819 MaskedEditExtender erases all values beyond first comma separtor · 13423 MaskedEdit(Extender/Validator) combo problem · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set. · 15190 MaskedEditValidator can’t make use of MaskedEditExtender’s UserDateFormat property · 13898 MaskedEdit Extender with custom date type mask gives javascript error · 14692 MaskedEdit error in “yy/MM/dd” format. · 16186 MaskedEditExtender does not handle century properly in a date mask · 26456 MaskedEditBehavior. 
ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == “” · 21474 Error on MaskedEditExtender working with number format · 23023 MaskedEditExtender’s ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false · 13656 MaskedEditValidator Min/Max Date value issue Conclusion This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.

    Read the article

  • The Grenelle II Act In France: A Milestone Towards Integrated Reporting

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle In July of 2010, France took a significant step towards mandating integrated sustainability and financial reporting for all large companies with a new law called Grenelle II. Article 225 of Grenelle II requires that many listed companies on the French stock exchanges incorporate information on the social and environmental consequences of their activities into their annual reports, as well as their societal commitments for sustainable development. The decree that implements Article 225 of Grenelle II was passed in April 2012. Grenelle II is the strongest governmental mandate yet in support of sustainability reporting. The law defines the phase-in process, with large listed companies expected to comply in their 2012 reports and smaller companies expected to comply with their 2014 annual reports. This extra-financial information will have to be embedded in the annual management report, approved by the Board of Directors, verified by a third-party body and given to the annual general meeting. The subjects that must be reported on are grouped into Environmental, Social, and Governance categories. Oracle solutions can help organizations integrate financial and sustainability reporting and provide a more accurate and auditable approach to collecting, consolidating, and reporting such environmental, social, and economic metrics. Through Oracle Environmental Accounting and Reporting and Oracle Hyperion Financial Management Sustainability Starter Kit organizations can collect environmental, social and governance data and collect and consolidate corporate sustainability reporting data from multiple systems and business units. For more information about these solutions please contact [email protected].

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now, people are spending a huge amount of time in Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share makes them the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front-end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook, and in some cases Facebook will act as the front-end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article

  • IASA South East Florida Chapter Meeting Recap - June 2011

    - by Sam Abraham
    Erik Russell and Giles Marino were our speakers for the June 2011 IASA South East Florida Chapter meeting.    Attendees filled all available seats at the Microsoft office conference room where the event was held. This highlights the high interest in Enterprise Architecture as a career track and chartered project role. Also in attendance were our Board of Directors and Alex Funkhouser, President, Sherlock Technology.   Rainer Habermann, Chapter President, kicked off the meeting by introducing our speakers and Board of Directors.   Alex Funkhouser, President of South Florida’s staffing firm Sherlock Technology spoke briefly about available Software Architect positions in the area. Alex also congratulated and presented this week’s Sherlock Raffle winner with $500 in cash.   Our speakers Giles and Erik then proceeded with their talk. Erik presented a business case in the government sector where Enterprise Architecture helped a government entity cut costs and streamline its various business operations. Technologies leveraged in Erik’s demonstrated project were Java-based.   Giles then followed with a thorough demonstration of the Architecture patterns he used to migrate a complete backend system for an insurance company to the .Net Platform.   Audience was very engaged with our speakers as evidenced by the large number of follow-up questions asked at the end of the talk.   We greatly enjoyed Giles and Erik’s talk and look forward to having them share with us more of their adventures as Enterprise Architects in the near future.   Below are some photos of the event.   Sam Abraham Secretary- IASA South East Florida Chapter. http://www.iasaglobal.org/iasa/South_East_Florida.asp Chapter President - Rainer Habermann kicks off our meeting.   Sherlock Technology President Alex Funkhouser holding Sherlock's weekly cash prize. Alex shares available Software Architect opportunities with our members Erik Russell addressing our membership Giles Marino sharing his architecture experience in the insurance industry In this photo: Dave Noderer, Rainer Habermann, Quent Herschelman and Alex Funkhouser. Event attracted a large audience and filled the Microsoft conference room where it was held

    Read the article

  • Refactoring in domain driven design

    - by Andrew Whitaker
    I've just started working on a project and we're using domain-driven design (as defined by Eric Evans in Domain-Driven Design: Tackling Complexity in the Heart of Software). I believe that our project is certainly a candidate for this design pattern, as Evans describes it in his book. I'm struggling with the idea of constantly refactoring. I know refactoring is a necessity in any project and will happen inevitably as the software changes. However, in my experience, refactoring occurs when the needs of the development team change, not as understanding of the domain changes ("refactoring to greater insight", as Evans calls it). I'm most concerned with breakthroughs in understanding of the domain model. I understand making small changes, but what if a large change in the model is necessary? What's an effective way of convincing yourself (and others) that you should refactor after you obtain a clearer domain model? After all, refactoring to improve code organization or performance could be completely separate from how expressive the code is in terms of the ubiquitous language. Sometimes it just seems like there's not enough time to refactor. Luckily, SCRUM lends itself to refactoring. The iterative nature of SCRUM makes it easy to build a small piece and change it. But over time that piece will get larger, and what if you have a breakthrough after that piece is so large that it will be too difficult to change? Has anyone worked on a project employing domain-driven design? If so, it would be great to get some insight on this one. I'd especially like to hear some success stories, since DDD seems very difficult to get right. Thanks!

    Read the article

  • Don’t miss the Oracle Webcast: Enabling Effective Decision Making with “One Source of the Truth” at BB&T

    - by Rob Reynolds
    Webcast Date: September 17th, 2012 - 9 a.m. PT / 12 p.m. ET. BB&T Corporation (NYSE: BBT) is one of the largest financial services holding companies in the United States. One of their IT goals is to provide "one source of truth" to enable more effective decision making at the corporate and local level. By using Oracle's Hyperion Enterprise Planning Suite and Oracle Essbase, BB&T streamlined their planning and financial reporting processes. Large volumes of data were consolidated into a single reporting solution, giving stakeholders more timely and accurate information. By providing a central and automated collaboration tool, BB&T is able to prepare more accurate financial forecasts, rapidly consolidate large amounts of data, and make more informed decisions. Join us on September 17th for a live webcast to hear about BB&T's journey to achieve "One Source of Truth" and learn how Oracle's Hyperion Planning Suite and Oracle Essbase allow you to:

    Adopt best practices like rolling forecasts and driver-based planning
    Reduce the time and effort dedicated to the annual budget process
    Remove forecasting uncertainty with predictive modeling capabilities
    Rapidly analyze shifting market conditions with a powerful calculation engine
    Prioritize resources effectively with complete visibility into all potential risks
    Link strategy and execution with integrated strategic, financial and operational planning

    Register here.

    Read the article
