
WAVE PCM soundfile format
The WAVE file format is a subset of Microsoft's RIFF specification for the storage of multimedia files. A RIFF file starts out with a file header followed by a sequence of data chunks. A WAVE file is often just a RIFF file with a single "WAVE" chunk which consists of two sub-chunks -- a "fmt " chunk specifying the data format and a "data" chunk containing the actual sample data. Call this form the "canonical form". A more complete description, which mostly covers the non-PCM and registered proprietary data formats, can be found at MSDN.

I use the standard WAVE format as created by the sox program:
[Figure: diagram of the canonical WAVE file format]

Offset  Size  Name             Description

The canonical WAVE format starts with the RIFF header:

0       4     ChunkID          Contains the letters "RIFF" in ASCII form
                               (0x52494646 big-endian form).

4       4     ChunkSize        36 + SubChunk2Size, or more precisely:
                               4 + (8 + SubChunk1Size) + (8 + SubChunk2Size)
                               This is the size of the rest of the chunk
                               following this number, i.e. the size of the
                               entire file in bytes minus 8 bytes for the
                               two fields not included in this count:
                               ChunkID and ChunkSize.

8       4     Format           Contains the letters "WAVE"
                               (0x57415645 big-endian form).

The "WAVE" format consists of two subchunks: "fmt " and "data".
The "fmt " subchunk describes the sound data's format:

12      4     Subchunk1ID      Contains the letters "fmt "
                               (0x666d7420 big-endian form).

16      4     Subchunk1Size    16 for PCM. This is the size of the
                               rest of the subchunk which follows this number.

20      2     AudioFormat      PCM = 1 (i.e. linear quantization).
                               Values other than 1 indicate some
                               form of compression.

22      2     NumChannels      Mono = 1, Stereo = 2, etc.
24      4     SampleRate       8000, 44100, etc.
28      4     ByteRate         == SampleRate * NumChannels * BitsPerSample/8
32      2     BlockAlign       == NumChannels * BitsPerSample/8
                               The number of bytes for one sample frame
                               including all channels.
34      2     BitsPerSample    8 bits = 8, 16 bits = 16, etc.

        2     ExtraParamSize   Does not exist for PCM.
        X     ExtraParams      Space for extra parameters.

The "data" subchunk contains the size of the data and the actual sound:

36      4     Subchunk2ID      Contains the letters "data"
                               (0x64617461 big-endian form).

40      4     Subchunk2Size    == NumSamples * NumChannels * BitsPerSample/8
                               This is the number of bytes in the data.
                               You can also think of it as the size of the
                               rest of the subchunk following this number.

44      *     Data             The actual sound data.

As an example, here are the opening 72 bytes of a WAVE file with bytes shown as hexadecimal numbers:

52 49 46 46 24 08 00 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 02 00
22 56 00 00 88 58 01 00 04 00 10 00 64 61 74 61 00 08 00 00 00 00 00 00
24 17 1e f3 3c 13 3c 14 16 f9 18 f9 34 e7 23 a6 3c f2 24 f2 11 ce 1a 0d
Here is the interpretation of these bytes as a WAVE soundfile:
[Figure: the example bytes annotated field by field as a WAVE soundfile]
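To make the field layout concrete, here is a minimal Java sketch that assembles the 44-byte canonical PCM header described above. The class and method names (`WavHeader`, `build`) are my own for illustration, not from any library; called with the example file's parameters (22050 Hz, stereo, 16-bit, 2048 data bytes) it reproduces the header bytes shown in the hex dump.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class WavHeader {
    /** Builds the 44-byte canonical PCM WAVE header for the given format. */
    public static byte[] build(int sampleRate, short numChannels,
                               short bitsPerSample, int dataSize) {
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes(StandardCharsets.US_ASCII));      // ChunkID
        b.putInt(36 + dataSize);                                // ChunkSize
        b.put("WAVE".getBytes(StandardCharsets.US_ASCII));      // Format
        b.put("fmt ".getBytes(StandardCharsets.US_ASCII));      // Subchunk1ID
        b.putInt(16);                                           // Subchunk1Size (PCM)
        b.putShort((short) 1);                                  // AudioFormat = 1 (PCM)
        b.putShort(numChannels);                                // NumChannels
        b.putInt(sampleRate);                                   // SampleRate
        b.putInt(sampleRate * numChannels * bitsPerSample / 8); // ByteRate
        b.putShort((short) (numChannels * bitsPerSample / 8));  // BlockAlign
        b.putShort(bitsPerSample);                              // BitsPerSample
        b.put("data".getBytes(StandardCharsets.US_ASCII));      // Subchunk2ID
        b.putInt(dataSize);                                     // Subchunk2Size
        return b.array();
    }
}
```

Note that ByteBuffer defaults to big-endian, so the explicit LITTLE_ENDIAN order is essential; only the four-character IDs are raw byte sequences unaffected by byte order.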
Notes:
The default byte ordering assumed for WAVE data files is little-endian. Files written using the big-endian byte ordering scheme have the identifier RIFX instead of RIFF.
The sample data must end on an even byte boundary: if Subchunk2Size is odd, a single padding byte follows the data, and that padding byte is not counted in Subchunk2Size.
8-bit samples are stored as unsigned bytes, ranging from 0 to 255. 16-bit samples are stored as 2's-complement signed integers, ranging from -32768 to 32767.
There may be additional subchunks in a Wave data stream. If so, each will have a char[4] SubChunkID, an unsigned long SubChunkSize, and SubChunkSize bytes of data.
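The note about additional subchunks suggests reading WAVE files defensively: rather than assuming the canonical 44-byte layout, walk the subchunk list until you find "fmt " and "data". Here is a minimal sketch of that walk; the class and method names are mine, and it assumes the 12-byte RIFF header has already been validated.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class ChunkWalker {
    /** Maps each SubChunkID to its SubChunkSize, in file order. */
    public static Map<String, Integer> listChunks(byte[] wav) {
        Map<String, Integer> chunks = new LinkedHashMap<>();
        ByteBuffer b = ByteBuffer.wrap(wav).order(ByteOrder.LITTLE_ENDIAN);
        b.position(12); // skip "RIFF", ChunkSize and "WAVE"
        while (b.remaining() >= 8) {
            byte[] id = new byte[4];
            b.get(id);
            int size = b.getInt();
            chunks.put(new String(id, StandardCharsets.US_ASCII), size);
            int skip = size + (size & 1); // chunk data is padded to an even length
            if (skip > b.remaining()) break; // truncated file: stop walking
            b.position(b.position() + skip);
        }
        return chunks;
    }
}
```

The `size + (size & 1)` step is where the even-byte-boundary rule above comes in: an odd-sized chunk is followed by one padding byte that must be skipped but is not part of the chunk.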
RIFF stands for Resource Interchange File Format.
General discussion of RIFF files:
Multimedia applications require the storage and management of a wide variety of data, including bitmaps, audio data, video data, and peripheral device control information. RIFF provides a way to store all these varied types of data. The type of data a RIFF file contains is indicated by the file extension. Examples of data that may be stored in RIFF files are:
Audio/visual interleaved data (.AVI)
Waveform data (.WAV)
Bitmapped data (.RDI)
MIDI information (.RMI)
Color palette (.PAL)
Multimedia movie (.RMN)
Animated cursor (.ANI)
A bundle of other RIFF files (.BND)
NOTE: At this point, AVI files are the only type of RIFF files that have been fully implemented using the current RIFF specification. Although WAV files have been implemented, these files are very simple, and their developers typically use an older specification in constructing them.

It's been a few months since I last updated this blog. In the meantime the boss kept bringing over project requirements; the scope is fairly broad and the target user base fairly large, so in the end I chose a different toolkit for development: Vert.x.
Its advantages:

① A fully asynchronous model: in distributed and clustered environments where network I/O is slow, it delivers good performance and improves concurrency.
② Polyglot support: our team moved into internet development midway through their careers, so between us we know C/C++, Python, Java, Go, and more, but the only language everyone knows is C/C++, and doing web development in C/C++ is no small pain. Java, by contrast, has accumulated years of web-development tooling that is very comfortable once you use it, Vert.x being one example.
③ Excellent support for distributed and clustered deployments.

To learn the toolkit I am building a small NAS project of my own to manage some of this site's files, with the eventual goal of replacing the whole site with it (whether that happens depends on having enough spare time). Enough talk, let's get started:
1. First, create the Vertx object that boots Vert.x's core machinery, usually in the main function:

package com.igtsys;

import com.igtsys.verticle.WebVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.dns.AddressResolverOptions;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;

public class Launcher {
    private static final Logger LOG = LoggerFactory.getLogger(Launcher.class); // launcher's logger, kept for later use
    
    public static void main(String[] args) { // entry point
        Vertx.clusteredVertx(new VertxOptions() // create a clustered Vertx instance
            .setWorkerPoolSize(20) // number of worker threads; workers run blocking, long-running tasks
            .setBlockedThreadCheckInterval(1000000) // blocked-thread check interval; a large value avoids a flood of timeout warnings while debugging
            .setAddressResolverOptions(new AddressResolverOptions()
                .addServer("172.18.8.1").addServer("172.18.8.2") // local DNS servers
                .addServer("61.139.2.69").addServer("202.98.96.68")), res -> { // completion callback
            if (res.succeeded()) { // once the cluster is up, deploy WebVerticle
                res.result().deployVerticle(WebVerticle.class, new DeploymentOptions().setInstances(1)); // WebVerticle listens on a port, so a single instance is enough
            } else {
                LOG.error("Failed to start clustered Vert.x", res.cause()); // log the failure instead of swallowing it
            }
        });
    }
}

2. Create the WebVerticle, which handles the simple web-facing logic:

package com.igtsys.verticle;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;
import io.vertx.core.net.JksOptions;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;
import io.vertx.ext.web.handler.CookieHandler;
import io.vertx.ext.web.handler.StaticHandler;


public class WebVerticle extends AbstractVerticle {
    private static final Logger LOG = LoggerFactory.getLogger(WebVerticle.class);
    
    private HttpServer https_server;
    private HttpServer http_server;

    @Override
    public void start() throws Exception {
        http_server = vertx.createHttpServer(new HttpServerOptions()
                                                .setHost("0.0.0.0")
                                                .setPort(80));
        http_server.requestHandler(res->{
            String url = res.absoluteURI();
            url = url.replaceFirst("http", "https");
            res.response().setStatusCode(301).putHeader("Location", url).end();
        });
        http_server.listen();
        //=====================================================================
        HttpServerOptions cfg_http_server = new HttpServerOptions()
                .setHost("0.0.0.0")
                .setPort(443)
                .setSsl(true)
                .setKeyStoreOptions(new JksOptions()
                        .setPath("server-keystore.jks")
                        .setPassword("secret"));
        
        https_server = vertx.createHttpServer(cfg_http_server);
        Router router = Router.router(vertx);

        router.route().handler(BodyHandler.create())
                      .handler(CookieHandler.create());
        router.route().handler(StaticHandler.create()
                                .setWebRoot("./webroot")
                                .setDefaultContentEncoding("UTF-8"));
        
        https_server.requestHandler(router);
        https_server.listen(res->{
            if (res.succeeded()) {
                LOG.info("HTTPS server listening on port " + https_server.actualPort());
            } else {
                LOG.error("HTTPS server failed to start", res.cause());
            }
        });
    }

    @Override
    public void stop() throws Exception {
        https_server.close();
        http_server.close();
    }

}

Because the project has high concurrency requirements, we use Tomcat's APR mode to improve efficiency. The Tomcat embedded in Spring Boot starts in NIO mode by default; to use APR mode on a Linux system, a few native libraries have to be installed first:

1) openssl, version 1.0.2 or later. If you don't use HTTPS, openssl can be skipped; you'll just get an openssl error at startup, which can be ignored.
2) apr: download the latest 1.6.x release from http://apr.apache.org/download.cgi
3) apr-util: available on the same download page; the latest version is also 1.6.x.
4) tomcat-native: Tomcat ships the sources itself; look for tomcat-native.tar in Tomcat's bin directory.

The detailed installation steps are omitted here; they are easy to find online.

Most articles online about configuring Tomcat APR only explain how to set it up on a standalone Tomcat server, where changing the connector's protocol in server.xml is enough. With Spring Boot it is slightly more involved: you need an additional APR configuration class that switches the embedded Tomcat connector's protocol at startup.

import org.apache.catalina.core.AprLifecycleListener;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AprConnecter {
    @Bean
    public TomcatServletWebServerFactory servletContainer(){
        TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory();
        tomcat.setProtocol("org.apache.coyote.http11.Http11AprProtocol"); // switch the embedded connector from the default NIO to APR
        tomcat.addContextLifecycleListeners(new AprLifecycleListener()); // initializes the APR/native library at startup
        return tomcat;
    }
}

After a successful start you'll see it in the log output, or in Eclipse debug mode you can confirm that APR has been enabled.
[Figure: startup log showing the APR connector enabled]